Nov 29 06:34:07 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 29 06:34:07 crc restorecon[4753]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 06:34:07 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc 
restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 06:34:08 crc 
restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 
06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 
06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 06:34:08 crc 
restorecon[4753]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc 
restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc 
restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 
29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 
crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc 
restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc 
restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 06:34:08 crc restorecon[4753]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 06:34:08 crc 
restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 06:34:08 crc restorecon[4753]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 06:34:08 crc restorecon[4753]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 06:34:08 crc restorecon[4753]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 29 06:34:09 crc kubenswrapper[4947]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 29 06:34:09 crc kubenswrapper[4947]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 29 06:34:09 crc kubenswrapper[4947]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 29 06:34:09 crc kubenswrapper[4947]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 29 06:34:09 crc kubenswrapper[4947]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 29 06:34:09 crc kubenswrapper[4947]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.042960 4947 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045728 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045748 4947 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045753 4947 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045757 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045761 4947 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045765 4947 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045769 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045774 4947 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045779 4947 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045788 4947 
feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045792 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045796 4947 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045800 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045804 4947 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045808 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045811 4947 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045815 4947 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045818 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045822 4947 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045825 4947 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045828 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045832 4947 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045835 4947 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045839 4947 feature_gate.go:330] unrecognized feature gate: 
SetEIPForNLBIngressController Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045842 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045846 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045850 4947 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045854 4947 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045858 4947 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045862 4947 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045866 4947 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045869 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045873 4947 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045877 4947 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045880 4947 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045884 4947 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045887 4947 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045891 4947 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 
06:34:09.045894 4947 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045899 4947 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045903 4947 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045906 4947 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045910 4947 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045913 4947 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045917 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045921 4947 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045925 4947 feature_gate.go:330] unrecognized feature gate: Example Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045928 4947 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045931 4947 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045935 4947 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045938 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045942 4947 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045945 4947 feature_gate.go:330] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045949 4947 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045952 4947 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045957 4947 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045962 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045965 4947 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045970 4947 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045975 4947 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045980 4947 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045984 4947 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045990 4947 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.045995 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046002 4947 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046006 4947 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046010 4947 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046014 4947 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046018 4947 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046022 4947 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046025 4947 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046105 4947 flags.go:64] FLAG: --address="0.0.0.0" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046116 4947 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046124 4947 flags.go:64] FLAG: --anonymous-auth="true" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046130 4947 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046135 4947 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046139 4947 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046145 4947 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 
06:34:09.046150 4947 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046154 4947 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046158 4947 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046164 4947 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046168 4947 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046172 4947 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046176 4947 flags.go:64] FLAG: --cgroup-root="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046180 4947 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046184 4947 flags.go:64] FLAG: --client-ca-file="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046188 4947 flags.go:64] FLAG: --cloud-config="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046192 4947 flags.go:64] FLAG: --cloud-provider="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046196 4947 flags.go:64] FLAG: --cluster-dns="[]" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046201 4947 flags.go:64] FLAG: --cluster-domain="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046206 4947 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046211 4947 flags.go:64] FLAG: --config-dir="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046215 4947 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046235 4947 flags.go:64] FLAG: --container-log-max-files="5" Nov 29 06:34:09 crc 
kubenswrapper[4947]: I1129 06:34:09.046242 4947 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046247 4947 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046251 4947 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046255 4947 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046260 4947 flags.go:64] FLAG: --contention-profiling="false" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046264 4947 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046268 4947 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046273 4947 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046277 4947 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046283 4947 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046287 4947 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046291 4947 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046295 4947 flags.go:64] FLAG: --enable-load-reader="false" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046300 4947 flags.go:64] FLAG: --enable-server="true" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046304 4947 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046310 4947 flags.go:64] FLAG: --event-burst="100" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046314 4947 flags.go:64] FLAG: 
--event-qps="50" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046318 4947 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046322 4947 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046326 4947 flags.go:64] FLAG: --eviction-hard="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046331 4947 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046336 4947 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046340 4947 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046344 4947 flags.go:64] FLAG: --eviction-soft="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046348 4947 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046352 4947 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046356 4947 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046361 4947 flags.go:64] FLAG: --experimental-mounter-path="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046364 4947 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046368 4947 flags.go:64] FLAG: --fail-swap-on="true" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046372 4947 flags.go:64] FLAG: --feature-gates="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046377 4947 flags.go:64] FLAG: --file-check-frequency="20s" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046382 4947 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046386 4947 flags.go:64] 
FLAG: --hairpin-mode="promiscuous-bridge" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046390 4947 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046394 4947 flags.go:64] FLAG: --healthz-port="10248" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046398 4947 flags.go:64] FLAG: --help="false" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046402 4947 flags.go:64] FLAG: --hostname-override="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046407 4947 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046411 4947 flags.go:64] FLAG: --http-check-frequency="20s" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046416 4947 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046420 4947 flags.go:64] FLAG: --image-credential-provider-config="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046424 4947 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046428 4947 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046433 4947 flags.go:64] FLAG: --image-service-endpoint="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046437 4947 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046440 4947 flags.go:64] FLAG: --kube-api-burst="100" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046445 4947 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046449 4947 flags.go:64] FLAG: --kube-api-qps="50" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046453 4947 flags.go:64] FLAG: --kube-reserved="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046457 4947 flags.go:64] 
FLAG: --kube-reserved-cgroup="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046461 4947 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046465 4947 flags.go:64] FLAG: --kubelet-cgroups="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046469 4947 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046474 4947 flags.go:64] FLAG: --lock-file="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046478 4947 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046483 4947 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046488 4947 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046495 4947 flags.go:64] FLAG: --log-json-split-stream="false" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046499 4947 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046503 4947 flags.go:64] FLAG: --log-text-split-stream="false" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046507 4947 flags.go:64] FLAG: --logging-format="text" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046511 4947 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046515 4947 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046520 4947 flags.go:64] FLAG: --manifest-url="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046524 4947 flags.go:64] FLAG: --manifest-url-header="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046529 4947 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046533 4947 
flags.go:64] FLAG: --max-open-files="1000000" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046538 4947 flags.go:64] FLAG: --max-pods="110" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046543 4947 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046547 4947 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046551 4947 flags.go:64] FLAG: --memory-manager-policy="None" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046555 4947 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046559 4947 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046564 4947 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046568 4947 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046577 4947 flags.go:64] FLAG: --node-status-max-images="50" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046581 4947 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046585 4947 flags.go:64] FLAG: --oom-score-adj="-999" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046589 4947 flags.go:64] FLAG: --pod-cidr="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046598 4947 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046604 4947 flags.go:64] FLAG: --pod-manifest-path="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046608 4947 flags.go:64] FLAG: --pod-max-pids="-1" Nov 29 
06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046612 4947 flags.go:64] FLAG: --pods-per-core="0" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046618 4947 flags.go:64] FLAG: --port="10250" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046622 4947 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046626 4947 flags.go:64] FLAG: --provider-id="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046630 4947 flags.go:64] FLAG: --qos-reserved="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046634 4947 flags.go:64] FLAG: --read-only-port="10255" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046639 4947 flags.go:64] FLAG: --register-node="true" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046643 4947 flags.go:64] FLAG: --register-schedulable="true" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046647 4947 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046653 4947 flags.go:64] FLAG: --registry-burst="10" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046658 4947 flags.go:64] FLAG: --registry-qps="5" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046663 4947 flags.go:64] FLAG: --reserved-cpus="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046667 4947 flags.go:64] FLAG: --reserved-memory="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046672 4947 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046676 4947 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046680 4947 flags.go:64] FLAG: --rotate-certificates="false" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046684 4947 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046688 
4947 flags.go:64] FLAG: --runonce="false" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046692 4947 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046696 4947 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046701 4947 flags.go:64] FLAG: --seccomp-default="false" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046705 4947 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046709 4947 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046714 4947 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046718 4947 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046723 4947 flags.go:64] FLAG: --storage-driver-password="root" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046727 4947 flags.go:64] FLAG: --storage-driver-secure="false" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046731 4947 flags.go:64] FLAG: --storage-driver-table="stats" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046735 4947 flags.go:64] FLAG: --storage-driver-user="root" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046739 4947 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046743 4947 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046747 4947 flags.go:64] FLAG: --system-cgroups="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046751 4947 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046757 4947 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 29 
06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046761 4947 flags.go:64] FLAG: --tls-cert-file="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046765 4947 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046770 4947 flags.go:64] FLAG: --tls-min-version="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046774 4947 flags.go:64] FLAG: --tls-private-key-file="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046778 4947 flags.go:64] FLAG: --topology-manager-policy="none" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046782 4947 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046786 4947 flags.go:64] FLAG: --topology-manager-scope="container" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046790 4947 flags.go:64] FLAG: --v="2" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046796 4947 flags.go:64] FLAG: --version="false" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046801 4947 flags.go:64] FLAG: --vmodule="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046806 4947 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.046811 4947 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046911 4947 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046916 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046920 4947 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046923 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 
06:34:09.046927 4947 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046931 4947 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046934 4947 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046938 4947 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046942 4947 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046951 4947 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046955 4947 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046959 4947 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046962 4947 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046967 4947 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046971 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046975 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046979 4947 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046984 4947 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046991 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046995 4947 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.046999 4947 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047003 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047006 4947 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047010 4947 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047014 4947 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047017 4947 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047021 4947 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047024 4947 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047028 4947 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047032 4947 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047035 4947 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047039 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047042 4947 
feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047045 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047050 4947 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047054 4947 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047057 4947 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047061 4947 feature_gate.go:330] unrecognized feature gate: Example Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047065 4947 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047068 4947 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047072 4947 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047076 4947 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047080 4947 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047083 4947 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047087 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047090 4947 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047094 4947 feature_gate.go:330] unrecognized 
feature gate: InsightsConfig Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047097 4947 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047101 4947 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047104 4947 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047109 4947 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047112 4947 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047116 4947 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047119 4947 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047123 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047126 4947 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047131 4947 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047135 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047139 4947 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047143 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047147 4947 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047150 4947 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047154 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047157 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047161 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047164 4947 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047168 4947 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047172 4947 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047175 4947 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047178 4947 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.047182 4947 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 
06:34:09.047188 4947 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.057278 4947 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.058730 4947 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.058884 4947 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.058897 4947 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.058904 4947 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.058911 4947 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.058917 4947 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.058922 4947 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.058927 4947 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.058934 4947 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
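The `feature_gate.go:386] feature gates: {map[...]}` entries above serialize the effective gate set using Go's default map formatting. When triaging logs like these, it can help to turn that line back into structured data; the following is a minimal sketch (the `parse_feature_gates` helper and the sample line are illustrative, not part of kubelet itself):

```python
import re

def parse_feature_gates(line: str) -> dict:
    """Parse a kubelet 'feature gates: {map[Name:true ...]}' log line.

    Assumes Go's default map formatting: space-separated Name:bool pairs
    inside {map[...]}, as seen in the feature_gate.go:386 entries.
    """
    m = re.search(r"feature gates: \{map\[(.*?)\]\}", line)
    if not m:
        return {}
    gates = {}
    for pair in m.group(1).split():
        name, _, value = pair.partition(":")
        gates[name] = value == "true"
    return gates

sample = ("I1129 06:34:09.047188 4947 feature_gate.go:386] feature gates: "
          "{map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}")
print(parse_feature_gates(sample))
# → {'CloudDualStackNodeIPs': True, 'KMSv1': True, 'NodeSwap': False}
```

A dict like this makes it easy to diff the gate set printed at `06:34:09.047188` against the later dumps at `06:34:09.059256` and `06:34:09.059701` (in this log, all three are identical).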
Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.058945 4947 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.058952 4947 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.058958 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.058963 4947 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.058968 4947 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.058973 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.058976 4947 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.058980 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.058984 4947 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.058988 4947 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.058992 4947 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.058996 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059000 4947 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059004 4947 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059007 4947 
feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059011 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059016 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059019 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059023 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059027 4947 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059033 4947 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059036 4947 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059041 4947 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059045 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059048 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059053 4947 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059058 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059061 4947 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059067 4947 feature_gate.go:353] Setting GA feature gate 
DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059074 4947 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059078 4947 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059083 4947 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059088 4947 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059092 4947 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059096 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059100 4947 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059105 4947 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059109 4947 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059113 4947 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059117 4947 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059121 4947 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059126 4947 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059130 4947 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059133 4947 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059137 4947 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059141 4947 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059145 4947 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059148 4947 feature_gate.go:330] unrecognized feature gate: Example Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059152 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059156 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059160 4947 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059163 4947 feature_gate.go:330] 
unrecognized feature gate: GCPClusterHostedDNS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059171 4947 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059175 4947 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059179 4947 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059184 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059188 4947 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059193 4947 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059196 4947 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059201 4947 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059205 4947 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059208 4947 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059212 4947 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.059256 4947 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false 
UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059390 4947 feature_gate.go:330] unrecognized feature gate: Example Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059399 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059404 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059408 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059412 4947 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059416 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059420 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059426 4947 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059430 4947 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059434 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059438 4947 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059442 4947 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059446 4947 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059450 
4947 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059454 4947 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059458 4947 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059461 4947 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059465 4947 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059469 4947 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059473 4947 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059477 4947 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059483 4947 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059487 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059491 4947 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059495 4947 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059498 4947 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059502 4947 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059505 4947 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 
29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059513 4947 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059524 4947 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059529 4947 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059534 4947 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059539 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059544 4947 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059549 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059553 4947 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059557 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059560 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059565 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059568 4947 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059572 4947 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059575 4947 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 29 06:34:09 
crc kubenswrapper[4947]: W1129 06:34:09.059579 4947 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059583 4947 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059587 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059591 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059595 4947 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059598 4947 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059602 4947 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059606 4947 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059610 4947 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059613 4947 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059616 4947 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059620 4947 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059625 4947 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059629 4947 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059634 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059638 4947 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059642 4947 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059645 4947 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059649 4947 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059654 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059658 4947 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059662 4947 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059667 4947 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059672 4947 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059678 4947 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059682 4947 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059686 4947 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059690 4947 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.059694 4947 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.059701 4947 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.060193 4947 server.go:940] "Client rotation is on, will bootstrap in background" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.064122 4947 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.064297 4947 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
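Each kubenswrapper entry above carries a klog-style header (`Lmmdd hh:mm:ss.uuuuuu pid file:line] msg`, where `L` is the severity letter I/W/E/F). A small parser for that header makes it straightforward to pull out, say, only the E-level entries such as the certificate signing request failure below. This is a sketch of the common klog header shape; the regex and field names are my own, not a kubelet API:

```python
import re
from typing import Optional

# klog header: severity letter, MMDD, time, pid, file:line, "] ", message
KLOG_RE = re.compile(
    r"^(?P<severity>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+"
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+"
    r"(?P<pid>\d+)\s+"
    r"(?P<source>\S+:\d+)\]\s?(?P<message>.*)$"
)

def parse_klog(line: str) -> Optional[dict]:
    """Return the klog header fields as a dict, or None for non-klog lines."""
    m = KLOG_RE.match(line)
    return m.groupdict() if m else None

rec = parse_klog("W1129 06:34:09.047097 4947 feature_gate.go:330] "
                 "unrecognized feature gate: PinnedImages")
print(rec["severity"], rec["source"], rec["message"])
```

Applied to this journal (after stripping the `Nov 29 06:34:09 crc kubenswrapper[4947]: ` prefix that journald adds), filtering on `severity == "E"` isolates the `certificate_manager.go:562` connection-refused error from the surrounding feature-gate noise.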
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.065084 4947 server.go:997] "Starting client certificate rotation" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.065121 4947 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.065290 4947 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-14 04:33:23.003489799 +0000 UTC Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.065401 4947 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.070627 4947 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 29 06:34:09 crc kubenswrapper[4947]: E1129 06:34:09.071837 4947 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.072488 4947 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.080107 4947 log.go:25] "Validated CRI v1 runtime API" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.094526 4947 log.go:25] "Validated CRI v1 image API" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.096116 4947 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.099174 4947 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-29-06-27-47-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.099216 4947 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.120263 4947 manager.go:217] Machine: {Timestamp:2025-11-29 06:34:09.118712087 +0000 UTC m=+0.163094208 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:f0748469-4a41-446c-a5c3-776c2ca32148 BootID:3ead809c-947a-4269-a0f3-b817113e9662 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 
Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:7f:10:ed Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:7f:10:ed Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b1:32:1c Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:71:47:5d Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:fd:01:e1 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:79:c6:32 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:fe:6b:21 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:1a:32:87:9b:0a:fd Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:e2:4f:32:76:3a:2f Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 
Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 
06:34:09.120558 4947 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.120753 4947 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.121145 4947 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.121411 4947 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.121448 4947 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.121717 4947 topology_manager.go:138] "Creating topology manager with none policy"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.121734 4947 container_manager_linux.go:303] "Creating device plugin manager"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.121996 4947 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.122050 4947 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.122477 4947 state_mem.go:36] "Initialized new in-memory state store"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.122707 4947 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.123388 4947 kubelet.go:418] "Attempting to sync node with API server"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.123416 4947 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.123456 4947 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.123476 4947 kubelet.go:324] "Adding apiserver pod source"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.123495 4947 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.124741 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused
Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.124805 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused
Nov 29 06:34:09 crc kubenswrapper[4947]: E1129 06:34:09.124911 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError"
Nov 29 06:34:09 crc kubenswrapper[4947]: E1129 06:34:09.124848 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.125355 4947 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.125752 4947 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.126463 4947 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.127003 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.127027 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.127035 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.127042 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.127053 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.127060 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.127067 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.127077 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.127085 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.127092 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.127102 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.127110 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.127281 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.127789 4947 server.go:1280] "Started kubelet"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.128047 4947 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.128373 4947 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.128451 4947 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.128832 4947 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 29 06:34:09 crc systemd[1]: Started Kubernetes Kubelet.
Nov 29 06:34:09 crc kubenswrapper[4947]: E1129 06:34:09.129736 4947 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.47:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187c66ad70a6a00a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 06:34:09.127735306 +0000 UTC m=+0.172117387,LastTimestamp:2025-11-29 06:34:09.127735306 +0000 UTC m=+0.172117387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.130703 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.130750 4947 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 29 06:34:09 crc kubenswrapper[4947]: E1129 06:34:09.131070 4947 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.131083 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 07:34:18.923119515 +0000 UTC
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.131263 4947 server.go:460] "Adding debug handlers to kubelet server"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.135253 4947 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.134939 4947 volume_manager.go:287] "The desired_state_of_world populator starts"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.136201 4947 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 29 06:34:09 crc kubenswrapper[4947]: E1129 06:34:09.131531 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="200ms"
Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.138351 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused
Nov 29 06:34:09 crc kubenswrapper[4947]: E1129 06:34:09.138426 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError"
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.138903 4947 factory.go:55] Registering systemd factory
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.138995 4947 factory.go:221] Registration of the systemd container factory successfully
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.139447 4947 factory.go:153] Registering CRI-O factory
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.139469 4947 factory.go:221] Registration of the crio container factory successfully
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.139531 4947 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.139560 4947 factory.go:103] Registering Raw factory
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.139577 4947 manager.go:1196] Started watching for new ooms in manager
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.140180 4947 manager.go:319] Starting recovery of all containers
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144564 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144650 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144667 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144681 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144696 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144709 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144721 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144732 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144744 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144756 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144769 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144782 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144804 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144822 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144834 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144848 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144863 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144876 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144892 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144905 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144918 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144953 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144970 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.144985 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145001 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145014 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145032 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145046 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145060 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145077 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145095 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145107 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145120 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145132 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145144 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145157 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145170 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145183 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145196 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145209 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145241 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145255 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145266 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145282 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145294 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145307 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145319 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145332 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145348 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145362 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145374 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145386 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145405 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145419 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145473 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145487 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145498 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145509 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145521 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145535 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145546 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145559 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145570 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145582 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145595 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145606 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145618 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145629 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145640 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145653 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145664 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145674 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145686 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145697 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145709 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145721 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145734 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145747 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145758 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145772 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145788 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145802 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Nov 29 06:34:09 crc kubenswrapper[4947]: I1129
06:34:09.145812 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145824 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145837 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145848 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145861 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145873 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145886 4947 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145898 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145911 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145925 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145939 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145950 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145964 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145976 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.145991 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146004 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146022 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146035 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146049 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" 
seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146069 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146082 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146097 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146117 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146131 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146145 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 
06:34:09.146159 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146172 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146185 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146198 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146212 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146249 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146264 4947 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146276 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146288 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146301 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146312 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146326 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146339 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146351 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146364 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.146377 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147041 4947 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147070 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147086 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147101 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147114 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147131 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147143 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147159 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147178 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147192 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147206 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147237 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147253 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147268 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147282 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147294 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147305 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147317 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147330 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147343 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147356 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147372 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147387 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147403 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147417 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147431 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147449 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" 
seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147464 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147476 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147488 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147503 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147518 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147531 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147545 4947 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147559 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147573 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147587 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147603 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147617 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147631 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147645 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147672 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147686 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147701 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147724 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147745 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147763 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147778 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147793 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147814 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147835 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147853 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147868 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147890 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147906 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147921 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147935 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147954 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147974 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.147992 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148011 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148029 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148043 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148057 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148080 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148098 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148118 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148139 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148154 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148167 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 29 
06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148188 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148208 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148241 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148258 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148272 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148288 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148302 4947 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148314 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148327 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148339 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148357 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148421 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148438 4947 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148458 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148476 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148501 4947 reconstruct.go:97] "Volume reconstruction finished" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.148515 4947 reconciler.go:26] "Reconciler: start to sync state" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.158129 4947 manager.go:324] Recovery completed Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.173892 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.175624 4947 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.175792 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.175873 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.175888 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.176728 4947 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.176750 4947 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.176780 4947 state_mem.go:36] "Initialized new in-memory state store" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.177457 4947 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.177508 4947 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.177534 4947 kubelet.go:2335] "Starting kubelet main sync loop" Nov 29 06:34:09 crc kubenswrapper[4947]: E1129 06:34:09.177579 4947 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.179432 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Nov 29 06:34:09 crc kubenswrapper[4947]: E1129 06:34:09.179559 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.185885 4947 policy_none.go:49] "None policy: Start" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.186554 4947 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.186594 4947 state_mem.go:35] "Initializing new in-memory state store" Nov 29 06:34:09 crc kubenswrapper[4947]: E1129 06:34:09.235104 4947 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.249273 4947 manager.go:334] "Starting Device Plugin manager" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.249328 4947 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.249342 4947 server.go:79] "Starting device plugin registration server" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.249848 4947 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.249865 4947 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.250016 4947 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.250137 4947 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.250149 4947 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 29 06:34:09 crc kubenswrapper[4947]: E1129 06:34:09.257708 4947 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.277959 4947 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.278130 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.279266 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.279303 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.279313 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.279449 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.279817 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.279881 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.280144 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.280167 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.280177 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.280301 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.280425 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.280471 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.280757 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.280774 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.280782 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.280885 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.280906 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.280917 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.281006 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.281083 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.281099 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.281107 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.281328 
4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.281425 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.281608 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.281627 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.281636 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.281716 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.281823 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.281849 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.282506 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.282524 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.282531 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.282662 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.282684 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.283328 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.283348 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.283357 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.283398 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.283420 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.283432 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.283783 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.283828 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.283841 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:09 crc kubenswrapper[4947]: E1129 06:34:09.337496 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="400ms" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.350840 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.351039 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.351090 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.351115 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.351437 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.351497 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.351533 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.351566 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.351590 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.351622 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.351649 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.351687 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.351719 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.351745 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.351770 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.351803 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.354662 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.354707 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.354720 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.354748 4947 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 06:34:09 crc kubenswrapper[4947]: E1129 06:34:09.355890 4947 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.47:6443: connect: connection refused" node="crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.453165 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.453237 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.453262 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.453288 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.453307 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.453326 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.453343 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.453361 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.453375 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.453389 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.453404 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.453419 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.453436 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.453454 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.453477 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.453607 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.453885 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.453961 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.454005 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 
29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.454059 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.454082 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.454107 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.454151 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.454169 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.454208 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.454185 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.454210 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.454253 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.454269 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.454180 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.556840 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.558623 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.558658 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.558669 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.558694 4947 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 06:34:09 crc kubenswrapper[4947]: E1129 06:34:09.559045 4947 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.47:6443: connect: connection refused" node="crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.608195 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.615604 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.619738 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.642111 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-af649577660114748f87ff08254026fe2e3940d6d81fa677ed73671bec55d78e WatchSource:0}: Error finding container af649577660114748f87ff08254026fe2e3940d6d81fa677ed73671bec55d78e: Status 404 returned error can't find the container with id af649577660114748f87ff08254026fe2e3940d6d81fa677ed73671bec55d78e Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.642986 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-f327222adfd0b5f2c1916b81ca4fd820cd82f44c2326dd34888b5b2d4d30f38f WatchSource:0}: Error finding container f327222adfd0b5f2c1916b81ca4fd820cd82f44c2326dd34888b5b2d4d30f38f: Status 404 returned error can't find the container with id f327222adfd0b5f2c1916b81ca4fd820cd82f44c2326dd34888b5b2d4d30f38f Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.654765 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.662484 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.683396 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-f5b6971a8be7d0fdd4222f62f72a97faf2609a54cb5b08df1bb4ba36de7ed168 WatchSource:0}: Error finding container f5b6971a8be7d0fdd4222f62f72a97faf2609a54cb5b08df1bb4ba36de7ed168: Status 404 returned error can't find the container with id f5b6971a8be7d0fdd4222f62f72a97faf2609a54cb5b08df1bb4ba36de7ed168 Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.686448 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-a2ce5fc4f2f76dccc4503a1f2a42f4451121be22019b3de30efadc8594d7163c WatchSource:0}: Error finding container a2ce5fc4f2f76dccc4503a1f2a42f4451121be22019b3de30efadc8594d7163c: Status 404 returned error can't find the container with id a2ce5fc4f2f76dccc4503a1f2a42f4451121be22019b3de30efadc8594d7163c Nov 29 06:34:09 crc kubenswrapper[4947]: E1129 06:34:09.738709 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="800ms" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.959863 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.961976 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.962038 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 
29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.962056 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:09 crc kubenswrapper[4947]: I1129 06:34:09.962101 4947 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 06:34:09 crc kubenswrapper[4947]: E1129 06:34:09.962641 4947 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.47:6443: connect: connection refused" node="crc" Nov 29 06:34:09 crc kubenswrapper[4947]: W1129 06:34:09.963395 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Nov 29 06:34:09 crc kubenswrapper[4947]: E1129 06:34:09.963512 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Nov 29 06:34:10 crc kubenswrapper[4947]: I1129 06:34:10.130030 4947 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Nov 29 06:34:10 crc kubenswrapper[4947]: I1129 06:34:10.132031 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 15:48:37.156149244 +0000 UTC Nov 29 06:34:10 crc kubenswrapper[4947]: I1129 06:34:10.132126 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 105h14m27.024026192s 
for next certificate rotation Nov 29 06:34:10 crc kubenswrapper[4947]: I1129 06:34:10.184112 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a2ce5fc4f2f76dccc4503a1f2a42f4451121be22019b3de30efadc8594d7163c"} Nov 29 06:34:10 crc kubenswrapper[4947]: I1129 06:34:10.185894 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f5b6971a8be7d0fdd4222f62f72a97faf2609a54cb5b08df1bb4ba36de7ed168"} Nov 29 06:34:10 crc kubenswrapper[4947]: I1129 06:34:10.187408 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3de4272c0b222cd85007a56efe719dee66e1cf6ef12fd41121f741870210a995"} Nov 29 06:34:10 crc kubenswrapper[4947]: I1129 06:34:10.188582 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"af649577660114748f87ff08254026fe2e3940d6d81fa677ed73671bec55d78e"} Nov 29 06:34:10 crc kubenswrapper[4947]: I1129 06:34:10.189796 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f327222adfd0b5f2c1916b81ca4fd820cd82f44c2326dd34888b5b2d4d30f38f"} Nov 29 06:34:10 crc kubenswrapper[4947]: W1129 06:34:10.483652 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Nov 29 06:34:10 crc 
kubenswrapper[4947]: E1129 06:34:10.483751 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Nov 29 06:34:10 crc kubenswrapper[4947]: E1129 06:34:10.539415 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="1.6s" Nov 29 06:34:10 crc kubenswrapper[4947]: W1129 06:34:10.544166 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Nov 29 06:34:10 crc kubenswrapper[4947]: E1129 06:34:10.544289 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Nov 29 06:34:10 crc kubenswrapper[4947]: W1129 06:34:10.659107 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Nov 29 06:34:10 crc kubenswrapper[4947]: E1129 06:34:10.659254 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Nov 29 06:34:10 crc kubenswrapper[4947]: I1129 06:34:10.762948 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:10 crc kubenswrapper[4947]: I1129 06:34:10.764370 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:10 crc kubenswrapper[4947]: I1129 06:34:10.764414 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:10 crc kubenswrapper[4947]: I1129 06:34:10.764424 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:10 crc kubenswrapper[4947]: I1129 06:34:10.764450 4947 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 06:34:10 crc kubenswrapper[4947]: E1129 06:34:10.764985 4947 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.47:6443: connect: connection refused" node="crc" Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.087771 4947 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 29 06:34:11 crc kubenswrapper[4947]: E1129 06:34:11.089442 4947 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.129946 4947 csi_plugin.go:884] Failed to contact API server 
when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.194525 4947 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df" exitCode=0 Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.194603 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df"} Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.194749 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.196200 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.196247 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.196257 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.197476 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7"} Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.197507 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4"} Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.197520 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26"} Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.199575 4947 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b5ef6bea418d1acfb4cfbf3310e7898127bbd731f3eb432daf74d7eeecc4c796" exitCode=0 Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.199644 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.199634 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b5ef6bea418d1acfb4cfbf3310e7898127bbd731f3eb432daf74d7eeecc4c796"} Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.200408 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.200446 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.200457 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.201326 4947 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f" exitCode=0 Nov 
29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.201347 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f"} Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.201501 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.202465 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.202496 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.202507 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.202977 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c" exitCode=0 Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.203004 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c"} Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.203089 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.203818 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.204211 4947 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.204262 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.205794 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.206371 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.206391 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:11 crc kubenswrapper[4947]: I1129 06:34:11.206400 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.210299 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"836bfd5239874f47639673b177b0d441dff3d84e255c7c6d1983c9e0db5134fd"} Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.210866 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"eb7347898f9e11318a33ea5f24ef489a4e58da64e0631ac46aa91f30f5691ff5"} Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.210906 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0decb70631f9a60b5a44909b2cd152c099aa6955393b715617a93d2639a8f211"} Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.210356 4947 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.213430 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.213466 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.213478 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.217312 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8"} Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.217412 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c"} Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.217426 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970"} Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.217438 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983"} Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.219290 4947 generic.go:334] "Generic (PLEG): container finished" 
podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde" exitCode=0 Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.219379 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.219374 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde"} Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.220506 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.220541 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.220572 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.221946 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4"} Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.222048 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.228104 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.228153 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 
06:34:12.228172 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.230420 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f62b86b8b7ede5c01c1026af41f584b1e7a171ff14ef1e3769ddf8e73121296f"} Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.230525 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.231494 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.231547 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.231564 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.365637 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.366910 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.366963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.366975 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 06:34:12.367001 4947 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 06:34:12 crc kubenswrapper[4947]: I1129 
06:34:12.763994 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.235565 4947 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af" exitCode=0 Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.235681 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af"} Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.235722 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.236743 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.236778 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.236827 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.239066 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b"} Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.239126 4947 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.239182 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.239185 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.239287 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.239185 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.240838 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.240855 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.240880 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.240897 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.240902 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.240862 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.240927 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.240970 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.240938 4947 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.240992 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.240916 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.241073 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.780175 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:34:13 crc kubenswrapper[4947]: I1129 06:34:13.786756 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:34:14 crc kubenswrapper[4947]: I1129 06:34:14.248572 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a"} Nov 29 06:34:14 crc kubenswrapper[4947]: I1129 06:34:14.248628 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4"} Nov 29 06:34:14 crc kubenswrapper[4947]: I1129 06:34:14.248642 4947 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 06:34:14 crc kubenswrapper[4947]: I1129 06:34:14.248654 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad"} Nov 29 06:34:14 crc kubenswrapper[4947]: I1129 06:34:14.248675 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57"} Nov 29 06:34:14 crc kubenswrapper[4947]: I1129 06:34:14.248698 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:14 crc kubenswrapper[4947]: I1129 06:34:14.248794 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:14 crc kubenswrapper[4947]: I1129 06:34:14.250519 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:14 crc kubenswrapper[4947]: I1129 06:34:14.250522 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:14 crc kubenswrapper[4947]: I1129 06:34:14.250577 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:14 crc kubenswrapper[4947]: I1129 06:34:14.250596 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:14 crc kubenswrapper[4947]: I1129 06:34:14.250595 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:14 crc kubenswrapper[4947]: I1129 06:34:14.250778 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:15 crc kubenswrapper[4947]: I1129 06:34:15.122737 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 
06:34:15 crc kubenswrapper[4947]: I1129 06:34:15.257111 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d"} Nov 29 06:34:15 crc kubenswrapper[4947]: I1129 06:34:15.257238 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:15 crc kubenswrapper[4947]: I1129 06:34:15.257327 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:15 crc kubenswrapper[4947]: I1129 06:34:15.257399 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:15 crc kubenswrapper[4947]: I1129 06:34:15.258459 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:15 crc kubenswrapper[4947]: I1129 06:34:15.258494 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:15 crc kubenswrapper[4947]: I1129 06:34:15.258507 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:15 crc kubenswrapper[4947]: I1129 06:34:15.258702 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:15 crc kubenswrapper[4947]: I1129 06:34:15.258727 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:15 crc kubenswrapper[4947]: I1129 06:34:15.258739 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:15 crc kubenswrapper[4947]: I1129 06:34:15.259038 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 
06:34:15 crc kubenswrapper[4947]: I1129 06:34:15.259085 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:15 crc kubenswrapper[4947]: I1129 06:34:15.259098 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:15 crc kubenswrapper[4947]: I1129 06:34:15.288752 4947 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 29 06:34:15 crc kubenswrapper[4947]: I1129 06:34:15.956626 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:34:16 crc kubenswrapper[4947]: I1129 06:34:16.261574 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:16 crc kubenswrapper[4947]: I1129 06:34:16.262303 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:16 crc kubenswrapper[4947]: I1129 06:34:16.263502 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:16 crc kubenswrapper[4947]: I1129 06:34:16.263558 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:16 crc kubenswrapper[4947]: I1129 06:34:16.263571 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:16 crc kubenswrapper[4947]: I1129 06:34:16.263856 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:16 crc kubenswrapper[4947]: I1129 06:34:16.263902 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:16 crc kubenswrapper[4947]: I1129 06:34:16.263918 4947 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:16 crc kubenswrapper[4947]: I1129 06:34:16.609898 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:34:16 crc kubenswrapper[4947]: I1129 06:34:16.689519 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:34:16 crc kubenswrapper[4947]: I1129 06:34:16.689747 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:16 crc kubenswrapper[4947]: I1129 06:34:16.692327 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:16 crc kubenswrapper[4947]: I1129 06:34:16.692369 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:16 crc kubenswrapper[4947]: I1129 06:34:16.692380 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:17 crc kubenswrapper[4947]: I1129 06:34:17.264090 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:17 crc kubenswrapper[4947]: I1129 06:34:17.265485 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:17 crc kubenswrapper[4947]: I1129 06:34:17.265540 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:17 crc kubenswrapper[4947]: I1129 06:34:17.265550 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:17 crc kubenswrapper[4947]: I1129 06:34:17.381898 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 
29 06:34:17 crc kubenswrapper[4947]: I1129 06:34:17.382317 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:17 crc kubenswrapper[4947]: I1129 06:34:17.384013 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:17 crc kubenswrapper[4947]: I1129 06:34:17.384078 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:17 crc kubenswrapper[4947]: I1129 06:34:17.384097 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:17 crc kubenswrapper[4947]: I1129 06:34:17.922570 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 29 06:34:17 crc kubenswrapper[4947]: I1129 06:34:17.922933 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:17 crc kubenswrapper[4947]: I1129 06:34:17.924913 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:17 crc kubenswrapper[4947]: I1129 06:34:17.924969 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:17 crc kubenswrapper[4947]: I1129 06:34:17.924983 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:18 crc kubenswrapper[4947]: I1129 06:34:18.151064 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 29 06:34:18 crc kubenswrapper[4947]: I1129 06:34:18.266886 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:18 crc kubenswrapper[4947]: I1129 06:34:18.268282 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 29 06:34:18 crc kubenswrapper[4947]: I1129 06:34:18.268333 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:18 crc kubenswrapper[4947]: I1129 06:34:18.268352 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:19 crc kubenswrapper[4947]: E1129 06:34:19.257863 4947 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 29 06:34:19 crc kubenswrapper[4947]: I1129 06:34:19.689731 4947 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 29 06:34:19 crc kubenswrapper[4947]: I1129 06:34:19.689831 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 29 06:34:20 crc kubenswrapper[4947]: I1129 06:34:20.124284 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:34:20 crc kubenswrapper[4947]: I1129 06:34:20.124480 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:20 crc kubenswrapper[4947]: I1129 06:34:20.125717 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:20 crc 
kubenswrapper[4947]: I1129 06:34:20.125756 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:20 crc kubenswrapper[4947]: I1129 06:34:20.125768 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:20 crc kubenswrapper[4947]: I1129 06:34:20.129372 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:34:20 crc kubenswrapper[4947]: I1129 06:34:20.271292 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:20 crc kubenswrapper[4947]: I1129 06:34:20.272260 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:20 crc kubenswrapper[4947]: I1129 06:34:20.272306 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:20 crc kubenswrapper[4947]: I1129 06:34:20.272317 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:22 crc kubenswrapper[4947]: I1129 06:34:22.130531 4947 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 29 06:34:22 crc kubenswrapper[4947]: E1129 06:34:22.140933 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Nov 29 06:34:22 crc kubenswrapper[4947]: E1129 06:34:22.368466 4947 kubelet_node_status.go:99] "Unable to register node with API 
server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Nov 29 06:34:22 crc kubenswrapper[4947]: W1129 06:34:22.901312 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 29 06:34:22 crc kubenswrapper[4947]: I1129 06:34:22.901436 4947 trace.go:236] Trace[1648791988]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Nov-2025 06:34:12.899) (total time: 10002ms): Nov 29 06:34:22 crc kubenswrapper[4947]: Trace[1648791988]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (06:34:22.901) Nov 29 06:34:22 crc kubenswrapper[4947]: Trace[1648791988]: [10.002371069s] [10.002371069s] END Nov 29 06:34:22 crc kubenswrapper[4947]: E1129 06:34:22.901469 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 29 06:34:23 crc kubenswrapper[4947]: W1129 06:34:23.125631 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 29 06:34:23 crc kubenswrapper[4947]: I1129 06:34:23.125759 4947 trace.go:236] Trace[706627518]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Nov-2025 06:34:13.123) (total time: 10002ms): Nov 29 06:34:23 crc kubenswrapper[4947]: Trace[706627518]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (06:34:23.125) Nov 29 06:34:23 crc kubenswrapper[4947]: Trace[706627518]: [10.00214467s] [10.00214467s] END Nov 29 06:34:23 crc kubenswrapper[4947]: E1129 06:34:23.125782 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 29 06:34:23 crc kubenswrapper[4947]: W1129 06:34:23.280165 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 29 06:34:23 crc kubenswrapper[4947]: I1129 06:34:23.280267 4947 trace.go:236] Trace[823094556]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Nov-2025 06:34:13.279) (total time: 10001ms): Nov 29 06:34:23 crc kubenswrapper[4947]: Trace[823094556]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:34:23.280) Nov 29 06:34:23 crc kubenswrapper[4947]: Trace[823094556]: [10.001105844s] [10.001105844s] END Nov 29 06:34:23 crc kubenswrapper[4947]: E1129 06:34:23.280287 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 29 06:34:23 crc kubenswrapper[4947]: I1129 06:34:23.360263 4947 patch_prober.go:28] interesting 
pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 29 06:34:23 crc kubenswrapper[4947]: I1129 06:34:23.360360 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 29 06:34:23 crc kubenswrapper[4947]: I1129 06:34:23.366849 4947 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 29 06:34:23 crc kubenswrapper[4947]: I1129 06:34:23.366942 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 29 06:34:25 crc kubenswrapper[4947]: I1129 06:34:25.568933 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:25 crc kubenswrapper[4947]: I1129 06:34:25.570275 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:25 crc kubenswrapper[4947]: I1129 06:34:25.570344 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:25 crc 
kubenswrapper[4947]: I1129 06:34:25.570363 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:25 crc kubenswrapper[4947]: I1129 06:34:25.570401 4947 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 06:34:25 crc kubenswrapper[4947]: E1129 06:34:25.575772 4947 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 29 06:34:26 crc kubenswrapper[4947]: I1129 06:34:26.618592 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:34:26 crc kubenswrapper[4947]: I1129 06:34:26.618751 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:26 crc kubenswrapper[4947]: I1129 06:34:26.619938 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:26 crc kubenswrapper[4947]: I1129 06:34:26.619968 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:26 crc kubenswrapper[4947]: I1129 06:34:26.619978 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:26 crc kubenswrapper[4947]: I1129 06:34:26.622663 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:34:27 crc kubenswrapper[4947]: I1129 06:34:27.289161 4947 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 06:34:27 crc kubenswrapper[4947]: I1129 06:34:27.289240 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:27 crc kubenswrapper[4947]: I1129 06:34:27.290280 4947 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:27 crc kubenswrapper[4947]: I1129 06:34:27.290341 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:27 crc kubenswrapper[4947]: I1129 06:34:27.290358 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:27 crc kubenswrapper[4947]: I1129 06:34:27.378161 4947 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 29 06:34:27 crc kubenswrapper[4947]: I1129 06:34:27.439259 4947 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 29 06:34:27 crc kubenswrapper[4947]: I1129 06:34:27.531508 4947 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.136126 4947 apiserver.go:52] "Watching apiserver" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.139249 4947 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.139567 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.140137 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.140327 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.140362 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.140331 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.140406 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.140412 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 06:34:28 crc kubenswrapper[4947]: E1129 06:34:28.140478 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:34:28 crc kubenswrapper[4947]: E1129 06:34:28.140640 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:34:28 crc kubenswrapper[4947]: E1129 06:34:28.140761 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.141794 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.142686 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.142963 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.143170 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.143276 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.143417 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.143460 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.143421 4947 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.144353 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.170325 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.176090 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.182825 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.191809 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.193378 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.194701 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.202342 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.212212 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.222911 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.238087 4947 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.238585 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.249869 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.269139 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.300449 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.311265 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.332011 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.345262 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.356120 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.359886 4947 trace.go:236] Trace[1144228837]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Nov-2025 06:34:13.568) (total time: 14791ms): Nov 29 06:34:28 crc kubenswrapper[4947]: Trace[1144228837]: ---"Objects listed" error: 14791ms (06:34:28.359) Nov 29 06:34:28 crc kubenswrapper[4947]: Trace[1144228837]: [14.791540573s] [14.791540573s] END Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.359920 4947 
reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.362448 4947 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.367326 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.368442 4947 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.381435 4947 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60074->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.381504 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60074->192.168.126.11:17697: read: connection reset by peer" Nov 29 06:34:28 crc 
kubenswrapper[4947]: I1129 06:34:28.381814 4947 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.381866 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.387033 4947 csr.go:261] certificate signing request csr-z7fps is approved, waiting to be issued Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.430981 4947 csr.go:257] certificate signing request csr-z7fps is issued Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463316 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463371 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463398 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463421 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463446 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463467 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463490 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463514 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463540 4947 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463561 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463590 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463623 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463656 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463682 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463704 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463728 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463750 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463775 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463799 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463844 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463810 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463897 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463905 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.463924 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464005 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464026 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464047 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464067 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464088 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464092 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464109 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464133 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464155 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464178 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464200 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464249 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464307 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464330 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464351 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464393 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464419 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464422 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464444 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464420 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464472 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464499 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464527 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464553 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464651 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464712 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464740 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464765 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464790 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464851 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464876 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod 
\"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464896 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464917 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464936 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464957 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464980 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.465000 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.465022 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.465056 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.465082 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.465106 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.465127 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.465149 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.465173 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.465194 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.465233 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.465259 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464424 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464513 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464533 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464585 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464664 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.465318 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464772 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464828 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464943 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.464986 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.465025 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.465276 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.465407 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.465477 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.465614 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.465824 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.465830 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.466652 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.466692 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.466835 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.466850 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.466900 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: E1129 06:34:28.467005 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:34:28.966980294 +0000 UTC m=+20.011362395 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.465281 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467137 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467171 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467199 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467246 4947 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467277 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467303 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467330 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467356 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467379 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467402 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467427 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467452 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467470 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467488 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467505 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467524 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467544 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467560 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467577 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467595 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 
06:34:28.467612 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467632 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467649 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467665 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467680 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467696 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: 
\"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467713 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467727 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467743 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467759 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467773 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467789 4947 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467807 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467823 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467841 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467855 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467870 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod 
\"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467883 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467924 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467942 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467959 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467974 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.467994 4947 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468010 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468026 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468041 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468057 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468072 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") 
pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468089 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468104 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468119 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468135 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468153 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468172 4947 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468194 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468232 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468343 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468370 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468393 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468416 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468444 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468466 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468491 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468518 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468543 4947 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468568 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468593 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468619 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.468642 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.469310 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.469371 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.469396 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.469419 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.469438 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.469457 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 
06:34:28.469477 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.469494 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.469517 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.469538 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.469557 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.469576 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.469592 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470195 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470269 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470296 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470321 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470344 
4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470377 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470408 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470433 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470458 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470481 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470580 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470611 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470646 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470671 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470699 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470726 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470750 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470777 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470808 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470834 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470858 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " 
Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470883 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470935 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470963 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.470988 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471014 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471040 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471064 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471087 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471110 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471138 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471162 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " 
Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471186 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471210 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471258 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471282 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471305 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471329 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471355 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471381 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471408 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471433 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471463 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 29 06:34:28 crc 
kubenswrapper[4947]: I1129 06:34:28.471488 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471511 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471534 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471559 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471583 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.471648 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.472469 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.472523 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.472552 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.472576 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 
06:34:28.472603 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.472633 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.472661 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.472687 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.472709 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.472738 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.472760 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.472781 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.472800 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.472860 4947 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") 
on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.473456 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.473487 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.473503 4947 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.473517 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.473534 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.473749 4947 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.473836 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.474703 4947 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479523 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479610 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479772 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479793 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479806 4947 reconciler_common.go:293] 
"Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479820 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479832 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479845 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479856 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479867 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479879 4947 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479891 4947 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479903 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479916 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479926 4947 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479939 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479954 4947 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479968 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479984 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" 
DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479995 4947 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.480008 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.480022 4947 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.480711 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.480782 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.485633 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.472993 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.473204 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.473591 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.473670 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.473718 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.473960 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: E1129 06:34:28.473969 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 06:34:28 crc kubenswrapper[4947]: E1129 06:34:28.491855 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:28.991838325 +0000 UTC m=+20.036220406 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.492012 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: E1129 06:34:28.474006 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 06:34:28 crc kubenswrapper[4947]: E1129 06:34:28.492107 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:28.992099301 +0000 UTC m=+20.036481382 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.474127 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.474243 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.474485 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.474858 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.475073 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.475028 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.475111 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.475138 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.475333 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.476029 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.476069 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.476660 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.477009 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.477431 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.477758 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.478711 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479082 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479451 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479459 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.479737 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.481661 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.481932 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.482104 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.482623 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.482961 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.483184 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.483327 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.483574 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.483790 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.484066 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.484089 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.484163 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.484242 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.484525 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.484731 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.484935 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.485082 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.485106 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.485168 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.485494 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.485630 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.492440 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.485879 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.485888 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.485996 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.486163 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.486273 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.486261 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.486415 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.486616 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.486743 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.486845 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.487110 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.487182 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.487523 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.487706 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.487970 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.488122 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.488259 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.488490 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.488648 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.488748 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.488762 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.489076 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.489124 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.489730 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.489635 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.490441 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.491045 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.492439 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.492568 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.492869 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.492941 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.493151 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.493309 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.495528 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.495666 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.495988 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.496567 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.496965 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.497017 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.497521 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.497552 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.497935 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.498102 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.498125 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.498329 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.498518 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.498828 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.499497 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.499406 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.500580 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.500915 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.501316 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.501285 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.502123 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.502028 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.502316 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.502431 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.502845 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.504114 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.504487 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.504786 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.504858 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.505046 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.505680 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.505944 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.494923 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.511771 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.514398 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod 
"e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.514842 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: E1129 06:34:28.515518 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 06:34:28 crc kubenswrapper[4947]: E1129 06:34:28.515559 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 06:34:28 crc kubenswrapper[4947]: E1129 06:34:28.515574 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:28 crc kubenswrapper[4947]: E1129 06:34:28.515660 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:29.01562197 +0000 UTC m=+20.060004041 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.516039 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.516662 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.521755 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.522161 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.522584 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.523267 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.524069 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.524386 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.524804 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.525787 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.526101 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.526545 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.526677 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: E1129 06:34:28.528120 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 06:34:28 crc kubenswrapper[4947]: E1129 06:34:28.528168 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 06:34:28 crc kubenswrapper[4947]: E1129 06:34:28.528189 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:28 crc kubenswrapper[4947]: E1129 06:34:28.528261 4947 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:29.028241395 +0000 UTC m=+20.072623476 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.528703 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.528846 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.528875 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.529486 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.529546 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.529603 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.529905 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.530195 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.530347 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.531190 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.531509 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.531988 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.532623 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.532717 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.531892 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.533015 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.533188 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.533541 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.533831 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.534385 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.537852 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.538538 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.538834 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.538881 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.540286 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.541123 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.541161 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.545589 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-sxdk5"] Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.545851 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.545889 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-sxdk5" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.547192 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.547542 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.547904 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.547969 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.548030 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.548158 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.548519 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.549656 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.550618 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.550769 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.551166 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.551282 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.551460 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.557848 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.558598 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.559076 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.560938 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.572458 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.574326 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.583538 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.583707 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.583927 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.584146 4947 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.584255 4947 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 29 
06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.584336 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.584432 4947 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.584520 4947 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.584607 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.584690 4947 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.585293 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.585399 4947 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.585485 4947 reconciler_common.go:293] "Volume detached 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.585559 4947 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.585624 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.585689 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.585744 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.585797 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.585850 4947 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.585904 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.585956 4947 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.586015 4947 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.586070 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.586123 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.586175 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.585044 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.586282 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.586563 4947 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.586636 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.586694 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.586752 4947 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.586829 4947 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.586890 4947 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.586949 4947 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.587020 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.587079 4947 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.587154 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.587210 4947 reconciler_common.go:293] "Volume detached 
for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.587308 4947 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.587370 4947 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.587423 4947 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.587479 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.587546 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.587602 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.587656 4947 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.587709 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.587766 4947 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.587892 4947 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.587954 4947 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.588010 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.588070 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.588124 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.588175 4947 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.588963 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.589035 4947 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.589090 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.589153 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.589246 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.589328 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" 
DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.589409 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.589475 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.589540 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.589605 4947 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.589670 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.589742 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.589819 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc 
kubenswrapper[4947]: I1129 06:34:28.589886 4947 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.589950 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.590019 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.590094 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.590168 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.590853 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.590937 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.590996 4947 reconciler_common.go:293] 
"Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591070 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591126 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591186 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.585267 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591265 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591385 4947 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: 
I1129 06:34:28.591406 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591419 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591430 4947 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591442 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591452 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591462 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591470 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591482 4947 reconciler_common.go:293] "Volume detached for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591492 4947 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591501 4947 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591509 4947 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591519 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591527 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591536 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591546 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on 
node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591556 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591566 4947 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591575 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591585 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591594 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591605 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591614 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591623 
4947 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591632 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591642 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591653 4947 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591662 4947 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591671 4947 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591680 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591689 4947 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591698 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591709 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591719 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591728 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591749 4947 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591758 4947 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591767 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on 
node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591776 4947 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591787 4947 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591796 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591805 4947 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591817 4947 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591827 4947 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591836 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 
06:34:28.591844 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591854 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591862 4947 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591873 4947 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591881 4947 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591892 4947 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591901 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591910 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591919 4947 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591930 4947 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591939 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591948 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591958 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591967 4947 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591978 4947 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591987 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.591996 4947 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592005 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592017 4947 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592025 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592037 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592046 4947 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 29 
06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592056 4947 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592065 4947 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592074 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592083 4947 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592093 4947 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592106 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592115 4947 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc 
kubenswrapper[4947]: I1129 06:34:28.592124 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592134 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592142 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592151 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592159 4947 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592169 4947 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592177 4947 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592188 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" 
(UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592196 4947 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592205 4947 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592231 4947 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592244 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592254 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592263 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592272 4947 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" 
DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592282 4947 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.592291 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.583756 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.599660 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.610746 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.621528 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.635844 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.645799 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.655096 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.662689 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.682958 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.693416 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c8f180a1-2fb0-4b96-85ed-1116677a7c99-hosts-file\") pod \"node-resolver-sxdk5\" (UID: \"c8f180a1-2fb0-4b96-85ed-1116677a7c99\") " pod="openshift-dns/node-resolver-sxdk5" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.693487 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swgjz\" (UniqueName: \"kubernetes.io/projected/c8f180a1-2fb0-4b96-85ed-1116677a7c99-kube-api-access-swgjz\") pod \"node-resolver-sxdk5\" (UID: \"c8f180a1-2fb0-4b96-85ed-1116677a7c99\") " pod="openshift-dns/node-resolver-sxdk5" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.709386 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.729989 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.740619 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.753856 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.762922 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 06:34:28 crc kubenswrapper[4947]: W1129 06:34:28.765035 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-3b7637e57457eb99158f350102347512eb2a567d96f3c98101566d6923448a34 WatchSource:0}: Error finding container 3b7637e57457eb99158f350102347512eb2a567d96f3c98101566d6923448a34: Status 404 returned error can't find the container with id 3b7637e57457eb99158f350102347512eb2a567d96f3c98101566d6923448a34 Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.769061 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 06:34:28 crc kubenswrapper[4947]: W1129 06:34:28.784146 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-1b583f45784fb2cc8662ec1a0b0af4552dec61811b8ff000b16f858fcc1e7df1 WatchSource:0}: Error finding container 1b583f45784fb2cc8662ec1a0b0af4552dec61811b8ff000b16f858fcc1e7df1: Status 404 returned error can't find the container with id 1b583f45784fb2cc8662ec1a0b0af4552dec61811b8ff000b16f858fcc1e7df1 Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.794867 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swgjz\" (UniqueName: \"kubernetes.io/projected/c8f180a1-2fb0-4b96-85ed-1116677a7c99-kube-api-access-swgjz\") pod \"node-resolver-sxdk5\" (UID: \"c8f180a1-2fb0-4b96-85ed-1116677a7c99\") " pod="openshift-dns/node-resolver-sxdk5" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.794927 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c8f180a1-2fb0-4b96-85ed-1116677a7c99-hosts-file\") pod \"node-resolver-sxdk5\" (UID: \"c8f180a1-2fb0-4b96-85ed-1116677a7c99\") " pod="openshift-dns/node-resolver-sxdk5" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.795018 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c8f180a1-2fb0-4b96-85ed-1116677a7c99-hosts-file\") pod \"node-resolver-sxdk5\" (UID: \"c8f180a1-2fb0-4b96-85ed-1116677a7c99\") " pod="openshift-dns/node-resolver-sxdk5" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.814317 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swgjz\" (UniqueName: \"kubernetes.io/projected/c8f180a1-2fb0-4b96-85ed-1116677a7c99-kube-api-access-swgjz\") 
pod \"node-resolver-sxdk5\" (UID: \"c8f180a1-2fb0-4b96-85ed-1116677a7c99\") " pod="openshift-dns/node-resolver-sxdk5" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.873348 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-sxdk5" Nov 29 06:34:28 crc kubenswrapper[4947]: W1129 06:34:28.891552 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8f180a1_2fb0_4b96_85ed_1116677a7c99.slice/crio-ad75b284d7c4617941909797f734920f4c55c91e8f2d917e8ae953c939b364da WatchSource:0}: Error finding container ad75b284d7c4617941909797f734920f4c55c91e8f2d917e8ae953c939b364da: Status 404 returned error can't find the container with id ad75b284d7c4617941909797f734920f4c55c91e8f2d917e8ae953c939b364da Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.996919 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.997042 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:28 crc kubenswrapper[4947]: I1129 06:34:28.997069 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:28 crc kubenswrapper[4947]: E1129 06:34:28.997165 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 06:34:28 crc kubenswrapper[4947]: E1129 06:34:28.997241 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:29.997207146 +0000 UTC m=+21.041589237 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 06:34:28 crc kubenswrapper[4947]: E1129 06:34:28.997293 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:34:29.997287138 +0000 UTC m=+21.041669219 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:34:28 crc kubenswrapper[4947]: E1129 06:34:28.997359 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 06:34:28 crc kubenswrapper[4947]: E1129 06:34:28.997405 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:29.997396751 +0000 UTC m=+21.041778832 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.066073 4947 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 29 06:34:29 crc kubenswrapper[4947]: W1129 06:34:29.066334 4947 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Nov 29 06:34:29 crc kubenswrapper[4947]: W1129 06:34:29.066364 4947 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Nov 29 06:34:29 crc kubenswrapper[4947]: W1129 06:34:29.066366 4947 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Nov 29 06:34:29 crc kubenswrapper[4947]: E1129 06:34:29.066380 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Post \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases?timeout=10s\": read tcp 38.102.83.47:45268->38.102.83.47:6443: use of closed network connection" interval="6.4s" Nov 29 06:34:29 crc kubenswrapper[4947]: E1129 06:34:29.066349 4947 
event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.47:45268->38.102.83.47:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187c66ad8fd396ad openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 06:34:09.650775725 +0000 UTC m=+0.695157806,LastTimestamp:2025-11-29 06:34:09.650775725 +0000 UTC m=+0.695157806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 29 06:34:29 crc kubenswrapper[4947]: W1129 06:34:29.066434 4947 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 29 06:34:29 crc kubenswrapper[4947]: W1129 06:34:29.066474 4947 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Nov 29 06:34:29 crc kubenswrapper[4947]: W1129 06:34:29.066483 4947 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: 
very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Nov 29 06:34:29 crc kubenswrapper[4947]: W1129 06:34:29.066495 4947 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 29 06:34:29 crc kubenswrapper[4947]: W1129 06:34:29.066505 4947 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 29 06:34:29 crc kubenswrapper[4947]: W1129 06:34:29.066529 4947 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 29 06:34:29 crc kubenswrapper[4947]: W1129 06:34:29.066535 4947 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Nov 29 06:34:29 crc kubenswrapper[4947]: W1129 06:34:29.066334 4947 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Nov 29 06:34:29 crc kubenswrapper[4947]: W1129 06:34:29.066555 4947 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very 
short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 29 06:34:29 crc kubenswrapper[4947]: W1129 06:34:29.068233 4947 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.097729 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.099203 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:29 crc kubenswrapper[4947]: E1129 06:34:29.097931 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 06:34:29 crc kubenswrapper[4947]: E1129 06:34:29.099370 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 06:34:29 crc kubenswrapper[4947]: E1129 06:34:29.099383 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:29 crc kubenswrapper[4947]: E1129 06:34:29.099426 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:30.099415086 +0000 UTC m=+21.143797167 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:29 crc kubenswrapper[4947]: E1129 06:34:29.099457 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 06:34:29 crc kubenswrapper[4947]: E1129 06:34:29.099465 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 06:34:29 crc kubenswrapper[4947]: E1129 06:34:29.099471 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:29 crc kubenswrapper[4947]: E1129 06:34:29.099507 4947 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:30.099501188 +0000 UTC m=+21.143883269 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.178693 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:29 crc kubenswrapper[4947]: E1129 06:34:29.178815 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.183394 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.183916 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.184835 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.185621 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.186384 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.186944 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.188992 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.189666 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.190715 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.191420 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.192759 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.193675 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.194686 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.195261 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.195776 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.198767 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.199384 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.201146 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.201750 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.202360 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.203419 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.203979 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.204426 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.205563 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.205994 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.211481 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.212642 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.213367 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.217142 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.217853 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.219088 4947 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.219319 4947 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.221354 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.222313 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.222811 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.224915 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.225566 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.226577 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.227136 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.228863 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.230010 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.230654 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.232179 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.233739 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.234579 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.235568 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.236403 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.237723 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.238943 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.241095 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.241205 4947 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.241774 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.242452 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.245406 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.246523 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.247737 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.252714 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.278165 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.294354 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000"} Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.294417 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d"} Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.294433 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8e9a2c933eeca27b0c31f4e3c4b2486ffaeff81a935129b70687af7b2873c8ef"} Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.298234 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb"} Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.298294 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3b7637e57457eb99158f350102347512eb2a567d96f3c98101566d6923448a34"} Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.300288 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.301985 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b" exitCode=255 Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.302069 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b"} Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.308763 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-sxdk5" event={"ID":"c8f180a1-2fb0-4b96-85ed-1116677a7c99","Type":"ContainerStarted","Data":"16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983"} Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 
06:34:29.308821 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-sxdk5" event={"ID":"c8f180a1-2fb0-4b96-85ed-1116677a7c99","Type":"ContainerStarted","Data":"ad75b284d7c4617941909797f734920f4c55c91e8f2d917e8ae953c939b364da"} Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.309518 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.311902 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1b583f45784fb2cc8662ec1a0b0af4552dec61811b8ff000b16f858fcc1e7df1"} Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.323336 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:29 crc kubenswrapper[4947]: E1129 06:34:29.325747 4947 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.335371 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.335548 4947 scope.go:117] "RemoveContainer" containerID="ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.353096 4947 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery
-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.371525 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.386597 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.401902 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.418159 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.432636 4947 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-11-29 06:29:28 +0000 UTC, rotation deadline is 2026-09-01 19:25:57.302922138 +0000 UTC Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.432732 4947 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: 
Waiting 6636h51m27.870193829s for next certificate rotation Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.433729 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.458240 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.473634 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.492027 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.508724 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.530077 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.547082 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.568941 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.656380 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.880320 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.929107 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.950351 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 29 06:34:29 crc kubenswrapper[4947]: I1129 06:34:29.979339 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.006459 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.006569 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.006597 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:30 crc kubenswrapper[4947]: E1129 06:34:30.006663 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 06:34:30 crc kubenswrapper[4947]: E1129 06:34:30.006667 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:34:32.006638316 +0000 UTC m=+23.051020397 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:34:30 crc kubenswrapper[4947]: E1129 06:34:30.006721 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:32.006705148 +0000 UTC m=+23.051087229 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 06:34:30 crc kubenswrapper[4947]: E1129 06:34:30.006756 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 06:34:30 crc kubenswrapper[4947]: E1129 06:34:30.006810 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:32.00680013 +0000 UTC m=+23.051182211 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.107556 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.107918 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:30 crc kubenswrapper[4947]: E1129 06:34:30.107783 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 06:34:30 crc kubenswrapper[4947]: E1129 06:34:30.108105 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 06:34:30 crc kubenswrapper[4947]: E1129 06:34:30.108126 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:30 crc kubenswrapper[4947]: E1129 06:34:30.108024 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 06:34:30 crc kubenswrapper[4947]: E1129 06:34:30.108190 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:32.1081703 +0000 UTC m=+23.152552381 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:30 crc kubenswrapper[4947]: E1129 06:34:30.108189 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 06:34:30 crc kubenswrapper[4947]: E1129 06:34:30.108205 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:30 crc kubenswrapper[4947]: E1129 06:34:30.108238 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-11-29 06:34:32.108232651 +0000 UTC m=+23.152614732 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.178250 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.178247 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:30 crc kubenswrapper[4947]: E1129 06:34:30.178390 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:34:30 crc kubenswrapper[4947]: E1129 06:34:30.178448 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.257920 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.316172 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.318092 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984"} Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.326981 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.327170 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-xlg45"] Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.327581 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.330286 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-zb4cv"] Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.330943 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.336314 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.336358 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.337169 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.337314 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.337750 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.341989 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.343616 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.355625 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.381691 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.395209 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.408713 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.411168 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5bjq\" (UniqueName: \"kubernetes.io/projected/6063fc55-4365-4f22-a005-bfac3812fdce-kube-api-access-k5bjq\") pod \"multus-additional-cni-plugins-zb4cv\" 
(UID: \"6063fc55-4365-4f22-a005-bfac3812fdce\") " pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.411240 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6063fc55-4365-4f22-a005-bfac3812fdce-system-cni-dir\") pod \"multus-additional-cni-plugins-zb4cv\" (UID: \"6063fc55-4365-4f22-a005-bfac3812fdce\") " pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.411277 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rndhc\" (UniqueName: \"kubernetes.io/projected/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-kube-api-access-rndhc\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.411300 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-multus-cni-dir\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.411319 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-multus-daemon-config\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.411336 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/6063fc55-4365-4f22-a005-bfac3812fdce-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zb4cv\" (UID: \"6063fc55-4365-4f22-a005-bfac3812fdce\") " pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.411454 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-cnibin\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.411541 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-cni-binary-copy\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.411597 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-host-run-netns\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.411636 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-host-var-lib-kubelet\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.411655 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/6063fc55-4365-4f22-a005-bfac3812fdce-cni-binary-copy\") pod \"multus-additional-cni-plugins-zb4cv\" (UID: \"6063fc55-4365-4f22-a005-bfac3812fdce\") " pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.411678 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6063fc55-4365-4f22-a005-bfac3812fdce-os-release\") pod \"multus-additional-cni-plugins-zb4cv\" (UID: \"6063fc55-4365-4f22-a005-bfac3812fdce\") " pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.411697 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6063fc55-4365-4f22-a005-bfac3812fdce-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zb4cv\" (UID: \"6063fc55-4365-4f22-a005-bfac3812fdce\") " pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.411718 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-host-run-multus-certs\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.411739 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6063fc55-4365-4f22-a005-bfac3812fdce-cnibin\") pod \"multus-additional-cni-plugins-zb4cv\" (UID: \"6063fc55-4365-4f22-a005-bfac3812fdce\") " pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.411785 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-os-release\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.411807 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-multus-socket-dir-parent\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.411833 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-hostroot\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.411876 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-etc-kubernetes\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.411928 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-host-run-k8s-cni-cncf-io\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.411950 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-host-var-lib-cni-bin\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.412011 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-host-var-lib-cni-multus\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.412063 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-system-cni-dir\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.412090 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-multus-conf-dir\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.418925 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.427188 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.437254 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\
",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.450749 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.465496 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.474175 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 29 
06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.480669 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.496254 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.510048 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.512751 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6063fc55-4365-4f22-a005-bfac3812fdce-system-cni-dir\") pod \"multus-additional-cni-plugins-zb4cv\" (UID: \"6063fc55-4365-4f22-a005-bfac3812fdce\") " pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.512802 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-multus-cni-dir\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc 
kubenswrapper[4947]: I1129 06:34:30.512830 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-multus-daemon-config\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.512854 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rndhc\" (UniqueName: \"kubernetes.io/projected/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-kube-api-access-rndhc\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.512875 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6063fc55-4365-4f22-a005-bfac3812fdce-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zb4cv\" (UID: \"6063fc55-4365-4f22-a005-bfac3812fdce\") " pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.512910 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-cnibin\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.512939 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-cni-binary-copy\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.512960 4947 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-host-run-netns\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.512981 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-host-var-lib-kubelet\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513003 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6063fc55-4365-4f22-a005-bfac3812fdce-cni-binary-copy\") pod \"multus-additional-cni-plugins-zb4cv\" (UID: \"6063fc55-4365-4f22-a005-bfac3812fdce\") " pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513029 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6063fc55-4365-4f22-a005-bfac3812fdce-os-release\") pod \"multus-additional-cni-plugins-zb4cv\" (UID: \"6063fc55-4365-4f22-a005-bfac3812fdce\") " pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513053 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6063fc55-4365-4f22-a005-bfac3812fdce-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zb4cv\" (UID: \"6063fc55-4365-4f22-a005-bfac3812fdce\") " pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513075 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-host-run-multus-certs\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513092 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6063fc55-4365-4f22-a005-bfac3812fdce-cnibin\") pod \"multus-additional-cni-plugins-zb4cv\" (UID: \"6063fc55-4365-4f22-a005-bfac3812fdce\") " pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513112 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-os-release\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513132 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-multus-socket-dir-parent\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513155 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-hostroot\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513175 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-etc-kubernetes\") pod \"multus-xlg45\" 
(UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513198 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-host-run-k8s-cni-cncf-io\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513246 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-host-var-lib-cni-bin\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513275 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-host-var-lib-cni-multus\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513291 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-host-var-lib-kubelet\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513300 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-system-cni-dir\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc 
kubenswrapper[4947]: I1129 06:34:30.513385 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-multus-conf-dir\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513457 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5bjq\" (UniqueName: \"kubernetes.io/projected/6063fc55-4365-4f22-a005-bfac3812fdce-kube-api-access-k5bjq\") pod \"multus-additional-cni-plugins-zb4cv\" (UID: \"6063fc55-4365-4f22-a005-bfac3812fdce\") " pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513606 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-multus-socket-dir-parent\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513680 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-host-run-multus-certs\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513703 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6063fc55-4365-4f22-a005-bfac3812fdce-cnibin\") pod \"multus-additional-cni-plugins-zb4cv\" (UID: \"6063fc55-4365-4f22-a005-bfac3812fdce\") " pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513716 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-multus-cni-dir\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513813 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-host-run-netns\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513390 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-system-cni-dir\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513954 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-hostroot\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.513955 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-host-var-lib-cni-bin\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.514004 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-host-run-k8s-cni-cncf-io\") pod \"multus-xlg45\" (UID: 
\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.514019 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-multus-conf-dir\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.514038 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-cnibin\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.514054 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-host-var-lib-cni-multus\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.514103 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6063fc55-4365-4f22-a005-bfac3812fdce-system-cni-dir\") pod \"multus-additional-cni-plugins-zb4cv\" (UID: \"6063fc55-4365-4f22-a005-bfac3812fdce\") " pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.514351 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6063fc55-4365-4f22-a005-bfac3812fdce-cni-binary-copy\") pod \"multus-additional-cni-plugins-zb4cv\" (UID: \"6063fc55-4365-4f22-a005-bfac3812fdce\") " pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: 
I1129 06:34:30.514359 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6063fc55-4365-4f22-a005-bfac3812fdce-os-release\") pod \"multus-additional-cni-plugins-zb4cv\" (UID: \"6063fc55-4365-4f22-a005-bfac3812fdce\") " pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.514409 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6063fc55-4365-4f22-a005-bfac3812fdce-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zb4cv\" (UID: \"6063fc55-4365-4f22-a005-bfac3812fdce\") " pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.514447 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-etc-kubernetes\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.514495 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-os-release\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.514683 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-multus-daemon-config\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.514752 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-cni-binary-copy\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.515105 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6063fc55-4365-4f22-a005-bfac3812fdce-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zb4cv\" (UID: \"6063fc55-4365-4f22-a005-bfac3812fdce\") " pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.516289 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.529239 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.533030 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rndhc\" (UniqueName: \"kubernetes.io/projected/2cbb3532-a15b-4cca-bde1-aa1ae20698f1-kube-api-access-rndhc\") pod \"multus-xlg45\" (UID: \"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\") " pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc 
kubenswrapper[4947]: I1129 06:34:30.537884 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5bjq\" (UniqueName: \"kubernetes.io/projected/6063fc55-4365-4f22-a005-bfac3812fdce-kube-api-access-k5bjq\") pod \"multus-additional-cni-plugins-zb4cv\" (UID: \"6063fc55-4365-4f22-a005-bfac3812fdce\") " pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.545095 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.555078 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.566922 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.567381 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.584963 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f584
08f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd1
57b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.597867 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.610379 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.611533 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.629349 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.640006 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-xlg45" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.646514 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.648452 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.652362 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 29 06:34:30 crc kubenswrapper[4947]: W1129 06:34:30.653174 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cbb3532_a15b_4cca_bde1_aa1ae20698f1.slice/crio-bd40a8b7e73b324066bee1e5bfd655641cd16aea25b8784598ad21e4eb8f8711 WatchSource:0}: Error finding container bd40a8b7e73b324066bee1e5bfd655641cd16aea25b8784598ad21e4eb8f8711: Status 404 returned error can't find the container with id bd40a8b7e73b324066bee1e5bfd655641cd16aea25b8784598ad21e4eb8f8711 Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.662514 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: W1129 06:34:30.673122 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6063fc55_4365_4f22_a005_bfac3812fdce.slice/crio-4012cc332000af8ad9e28d61da006f521661b7397d39b1bb60f87927eb141b3c WatchSource:0}: Error finding container 4012cc332000af8ad9e28d61da006f521661b7397d39b1bb60f87927eb141b3c: Status 404 returned error can't find the container with id 4012cc332000af8ad9e28d61da006f521661b7397d39b1bb60f87927eb141b3c Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.680208 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.729335 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-5zgvc"] Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.729860 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z4rxq"] Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.730093 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.731165 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.736364 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.736624 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.738321 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.738501 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.738640 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.739145 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 29 06:34:30 crc 
kubenswrapper[4947]: I1129 06:34:30.739189 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.739370 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.739379 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.739434 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.739562 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.739854 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.753884 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.766918 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.782098 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.795140 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.810355 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni
/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816373 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-systemd-units\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816434 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grbf6\" (UniqueName: \"kubernetes.io/projected/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-kube-api-access-grbf6\") pod \"ovnkube-node-z4rxq\" (UID: 
\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816454 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-run-netns\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816488 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816520 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-run-ovn-kubernetes\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816537 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-etc-openvswitch\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816553 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/5f4d791f-bb61-4aaa-a09c-3007b59645a7-proxy-tls\") pod \"machine-config-daemon-5zgvc\" (UID: \"5f4d791f-bb61-4aaa-a09c-3007b59645a7\") " pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816575 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-run-ovn\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816587 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-node-log\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816604 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-cni-bin\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816619 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-var-lib-openvswitch\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816640 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-ovnkube-config\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816670 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-ovn-node-metrics-cert\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816688 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-kubelet\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816708 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-run-systemd\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816729 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f4d791f-bb61-4aaa-a09c-3007b59645a7-mcd-auth-proxy-config\") pod \"machine-config-daemon-5zgvc\" (UID: \"5f4d791f-bb61-4aaa-a09c-3007b59645a7\") " pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816754 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-env-overrides\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816773 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-cni-netd\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816805 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-slash\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816822 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-run-openvswitch\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816835 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5f4d791f-bb61-4aaa-a09c-3007b59645a7-rootfs\") pod \"machine-config-daemon-5zgvc\" (UID: \"5f4d791f-bb61-4aaa-a09c-3007b59645a7\") " pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816853 4947 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-ovnkube-script-lib\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816866 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knbh9\" (UniqueName: \"kubernetes.io/projected/5f4d791f-bb61-4aaa-a09c-3007b59645a7-kube-api-access-knbh9\") pod \"machine-config-daemon-5zgvc\" (UID: \"5f4d791f-bb61-4aaa-a09c-3007b59645a7\") " pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.816898 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-log-socket\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.823663 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.845616 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd1
57b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.860023 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.871898 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.881134 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.892580 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.903277 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.914926 4947 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.917931 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-ovnkube-script-lib\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918006 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knbh9\" (UniqueName: \"kubernetes.io/projected/5f4d791f-bb61-4aaa-a09c-3007b59645a7-kube-api-access-knbh9\") pod \"machine-config-daemon-5zgvc\" (UID: \"5f4d791f-bb61-4aaa-a09c-3007b59645a7\") " pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918058 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-log-socket\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918135 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-systemd-units\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918166 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grbf6\" (UniqueName: \"kubernetes.io/projected/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-kube-api-access-grbf6\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918188 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-run-netns\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918237 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918293 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-run-ovn-kubernetes\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918337 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-etc-openvswitch\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918375 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f4d791f-bb61-4aaa-a09c-3007b59645a7-proxy-tls\") pod \"machine-config-daemon-5zgvc\" (UID: \"5f4d791f-bb61-4aaa-a09c-3007b59645a7\") " pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918429 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-run-ovn\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918454 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-node-log\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918479 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-cni-bin\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918503 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-var-lib-openvswitch\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918530 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-ovnkube-config\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918553 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-ovn-node-metrics-cert\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918578 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-kubelet\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918600 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-run-systemd\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918628 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f4d791f-bb61-4aaa-a09c-3007b59645a7-mcd-auth-proxy-config\") pod \"machine-config-daemon-5zgvc\" (UID: \"5f4d791f-bb61-4aaa-a09c-3007b59645a7\") " pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918654 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-env-overrides\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918677 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-cni-netd\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918701 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-slash\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918724 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-run-openvswitch\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918747 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5f4d791f-bb61-4aaa-a09c-3007b59645a7-rootfs\") pod \"machine-config-daemon-5zgvc\" (UID: \"5f4d791f-bb61-4aaa-a09c-3007b59645a7\") " pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.918830 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5f4d791f-bb61-4aaa-a09c-3007b59645a7-rootfs\") pod \"machine-config-daemon-5zgvc\" (UID: \"5f4d791f-bb61-4aaa-a09c-3007b59645a7\") " pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.919693 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-ovnkube-script-lib\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.919952 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-log-socket\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.919995 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-systemd-units\") pod 
\"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.920123 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-run-netns\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.920166 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.920194 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-run-ovn-kubernetes\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.920247 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-etc-openvswitch\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.922391 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-kubelet\") pod \"ovnkube-node-z4rxq\" (UID: 
\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.922413 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-cni-netd\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.922462 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-slash\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.922413 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-run-openvswitch\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.922496 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-node-log\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.922512 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-run-ovn\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 
06:34:30.922553 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-run-systemd\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.923139 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-env-overrides\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.923156 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-cni-bin\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.923173 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-var-lib-openvswitch\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.924319 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f4d791f-bb61-4aaa-a09c-3007b59645a7-proxy-tls\") pod \"machine-config-daemon-5zgvc\" (UID: \"5f4d791f-bb61-4aaa-a09c-3007b59645a7\") " pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.924342 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f4d791f-bb61-4aaa-a09c-3007b59645a7-mcd-auth-proxy-config\") pod \"machine-config-daemon-5zgvc\" (UID: \"5f4d791f-bb61-4aaa-a09c-3007b59645a7\") " pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.925084 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-ovnkube-config\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.925842 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-ovn-node-metrics-cert\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.936914 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.945455 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grbf6\" (UniqueName: \"kubernetes.io/projected/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-kube-api-access-grbf6\") pod \"ovnkube-node-z4rxq\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.947141 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knbh9\" (UniqueName: \"kubernetes.io/projected/5f4d791f-bb61-4aaa-a09c-3007b59645a7-kube-api-access-knbh9\") pod \"machine-config-daemon-5zgvc\" (UID: \"5f4d791f-bb61-4aaa-a09c-3007b59645a7\") " 
pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.947428 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.958566 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.974147 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:30 crc kubenswrapper[4947]: I1129 06:34:30.984688 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:30Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.029935 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.052312 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.057320 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.080126 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name
\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\
"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.111105 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.147276 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.178414 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:31 crc kubenswrapper[4947]: E1129 06:34:31.178900 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.197473 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.243686 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.282648 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.308015 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.321620 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8"} Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.321665 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"99a9a3742cee6ef09eff29f3ba013403c4af647ada61065fab9463b291936302"} Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.323383 4947 generic.go:334] "Generic (PLEG): container finished" podID="6063fc55-4365-4f22-a005-bfac3812fdce" containerID="4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a" exitCode=0 Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.323438 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" event={"ID":"6063fc55-4365-4f22-a005-bfac3812fdce","Type":"ContainerDied","Data":"4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a"} Nov 29 06:34:31 crc kubenswrapper[4947]: 
I1129 06:34:31.323454 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" event={"ID":"6063fc55-4365-4f22-a005-bfac3812fdce","Type":"ContainerStarted","Data":"4012cc332000af8ad9e28d61da006f521661b7397d39b1bb60f87927eb141b3c"} Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.324980 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xlg45" event={"ID":"2cbb3532-a15b-4cca-bde1-aa1ae20698f1","Type":"ContainerStarted","Data":"35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da"} Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.325005 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xlg45" event={"ID":"2cbb3532-a15b-4cca-bde1-aa1ae20698f1","Type":"ContainerStarted","Data":"bd40a8b7e73b324066bee1e5bfd655641cd16aea25b8784598ad21e4eb8f8711"} Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.326794 4947 generic.go:334] "Generic (PLEG): container finished" podID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerID="965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1" exitCode=0 Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.326876 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerDied","Data":"965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1"} Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.326919 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerStarted","Data":"8d5ea209885265916fa06511390b1479de47f6dd6633c4c2ae8ff7c6685d1d4d"} Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.327191 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:34:31 crc 
kubenswrapper[4947]: I1129 06:34:31.349351 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.387613 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.427710 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.468818 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.515512 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.553245 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.591283 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.630298 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.670794 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.708871 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.747603 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.787994 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.831849 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.879133 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\
\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-
o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.909848 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:31Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.976344 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.979389 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.979478 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.979507 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.979692 4947 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.994055 4947 kubelet_node_status.go:115] "Node was 
previously registered" node="crc" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.994416 4947 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.995504 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.995532 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.995545 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.995567 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:31 crc kubenswrapper[4947]: I1129 06:34:31.995581 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:31Z","lastTransitionTime":"2025-11-29T06:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:32 crc kubenswrapper[4947]: E1129 06:34:32.015995 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.020191 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.020258 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.020269 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.020293 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.020308 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:32Z","lastTransitionTime":"2025-11-29T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.031181 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:34:32 crc kubenswrapper[4947]: E1129 06:34:32.031508 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:34:36.031389651 +0000 UTC m=+27.075771732 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.031589 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.031658 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:32 crc kubenswrapper[4947]: E1129 06:34:32.031892 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 06:34:32 crc kubenswrapper[4947]: E1129 06:34:32.031965 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:36.031953095 +0000 UTC m=+27.076335176 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 06:34:32 crc kubenswrapper[4947]: E1129 06:34:32.032038 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 06:34:32 crc kubenswrapper[4947]: E1129 06:34:32.032172 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:36.032143709 +0000 UTC m=+27.076525960 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 06:34:32 crc kubenswrapper[4947]: E1129 06:34:32.041601 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"message\\\":\\\"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redha
t-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc
4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\
"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":4488870
27}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.045871 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.045904 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.045917 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.045934 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.045944 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:32Z","lastTransitionTime":"2025-11-29T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:32 crc kubenswrapper[4947]: E1129 06:34:32.057555 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.061997 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.062024 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.062035 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.062052 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.062063 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:32Z","lastTransitionTime":"2025-11-29T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:32 crc kubenswrapper[4947]: E1129 06:34:32.080834 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.085903 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.085958 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.085973 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.085995 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.086050 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:32Z","lastTransitionTime":"2025-11-29T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:32 crc kubenswrapper[4947]: E1129 06:34:32.102570 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: E1129 06:34:32.102752 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.104618 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.104684 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.104698 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.104717 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.104766 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:32Z","lastTransitionTime":"2025-11-29T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.133005 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.133058 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:32 crc kubenswrapper[4947]: E1129 06:34:32.133194 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 06:34:32 crc kubenswrapper[4947]: E1129 06:34:32.133211 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 06:34:32 crc kubenswrapper[4947]: E1129 06:34:32.133255 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:32 crc kubenswrapper[4947]: E1129 06:34:32.133300 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:36.133288163 +0000 UTC m=+27.177670244 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:32 crc kubenswrapper[4947]: E1129 06:34:32.133352 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 06:34:32 crc kubenswrapper[4947]: E1129 06:34:32.133362 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 06:34:32 crc kubenswrapper[4947]: E1129 06:34:32.133369 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:32 crc kubenswrapper[4947]: E1129 06:34:32.133387 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:36.133381385 +0000 UTC m=+27.177763456 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.178072 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.178072 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:34:32 crc kubenswrapper[4947]: E1129 06:34:32.178323 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:34:32 crc kubenswrapper[4947]: E1129 06:34:32.178322 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.208192 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.208255 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.208267 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.208284 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.208295 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:32Z","lastTransitionTime":"2025-11-29T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.310994 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.311033 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.311043 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.311059 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.311074 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:32Z","lastTransitionTime":"2025-11-29T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.331842 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b"} Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.334591 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerStarted","Data":"26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c"} Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.334653 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerStarted","Data":"ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c"} Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.334888 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerStarted","Data":"9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836"} Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.336114 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b"} Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.337535 4947 generic.go:334] "Generic (PLEG): container finished" podID="6063fc55-4365-4f22-a005-bfac3812fdce" containerID="80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477" exitCode=0 Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.337578 4947 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" event={"ID":"6063fc55-4365-4f22-a005-bfac3812fdce","Type":"ContainerDied","Data":"80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477"} Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.348846 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.361784 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.373521 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.391662 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.406250 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.414329 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.414371 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.414384 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.414403 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.414416 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:32Z","lastTransitionTime":"2025-11-29T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.421754 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.437022 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.460595 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd1
57b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.474584 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.490838 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.505067 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.517180 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.517226 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.517235 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.517249 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.517261 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:32Z","lastTransitionTime":"2025-11-29T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.517570 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z 
is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.532326 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4
ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.545328 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.567712 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.582125 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-ttw9v"] Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.582568 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ttw9v" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.590198 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.599026 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.618880 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.619424 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.619447 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.619457 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 
06:34:32.619472 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.619482 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:32Z","lastTransitionTime":"2025-11-29T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.637632 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.658080 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.708296 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.722380 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.722429 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.722442 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:32 crc 
kubenswrapper[4947]: I1129 06:34:32.722464 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.722481 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:32Z","lastTransitionTime":"2025-11-29T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.737532 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8440d6ae-a357-461e-a91f-a48625b4a9da-host\") pod \"node-ca-ttw9v\" (UID: \"8440d6ae-a357-461e-a91f-a48625b4a9da\") " pod="openshift-image-registry/node-ca-ttw9v" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.737567 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8440d6ae-a357-461e-a91f-a48625b4a9da-serviceca\") pod \"node-ca-ttw9v\" (UID: \"8440d6ae-a357-461e-a91f-a48625b4a9da\") " pod="openshift-image-registry/node-ca-ttw9v" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.737610 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mxr9\" (UniqueName: \"kubernetes.io/projected/8440d6ae-a357-461e-a91f-a48625b4a9da-kube-api-access-6mxr9\") pod \"node-ca-ttw9v\" (UID: \"8440d6ae-a357-461e-a91f-a48625b4a9da\") " pod="openshift-image-registry/node-ca-ttw9v" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.746088 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\
\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.788327 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b
67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"
podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.825507 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.825546 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.825556 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.825571 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.825580 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:32Z","lastTransitionTime":"2025-11-29T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.833714 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f40
5f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.838538 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mxr9\" (UniqueName: \"kubernetes.io/projected/8440d6ae-a357-461e-a91f-a48625b4a9da-kube-api-access-6mxr9\") pod \"node-ca-ttw9v\" (UID: \"8440d6ae-a357-461e-a91f-a48625b4a9da\") " pod="openshift-image-registry/node-ca-ttw9v" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.838606 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8440d6ae-a357-461e-a91f-a48625b4a9da-host\") pod \"node-ca-ttw9v\" (UID: \"8440d6ae-a357-461e-a91f-a48625b4a9da\") " pod="openshift-image-registry/node-ca-ttw9v" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.838639 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8440d6ae-a357-461e-a91f-a48625b4a9da-serviceca\") pod \"node-ca-ttw9v\" (UID: \"8440d6ae-a357-461e-a91f-a48625b4a9da\") " pod="openshift-image-registry/node-ca-ttw9v" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.838794 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8440d6ae-a357-461e-a91f-a48625b4a9da-host\") pod \"node-ca-ttw9v\" (UID: \"8440d6ae-a357-461e-a91f-a48625b4a9da\") " pod="openshift-image-registry/node-ca-ttw9v" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.840311 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8440d6ae-a357-461e-a91f-a48625b4a9da-serviceca\") pod \"node-ca-ttw9v\" (UID: \"8440d6ae-a357-461e-a91f-a48625b4a9da\") " pod="openshift-image-registry/node-ca-ttw9v" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.876577 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mxr9\" (UniqueName: \"kubernetes.io/projected/8440d6ae-a357-461e-a91f-a48625b4a9da-kube-api-access-6mxr9\") pod \"node-ca-ttw9v\" (UID: \"8440d6ae-a357-461e-a91f-a48625b4a9da\") " pod="openshift-image-registry/node-ca-ttw9v" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.894896 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.922200 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-ttw9v" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.927630 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.928368 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.928410 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.928421 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.928440 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.928452 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:32Z","lastTransitionTime":"2025-11-29T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:32 crc kubenswrapper[4947]: W1129 06:34:32.935707 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8440d6ae_a357_461e_a91f_a48625b4a9da.slice/crio-0d5df03ab339dcf8adc8b3013286e7190e88875ce507a98ace449d78582352f4 WatchSource:0}: Error finding container 0d5df03ab339dcf8adc8b3013286e7190e88875ce507a98ace449d78582352f4: Status 404 returned error can't find the container with id 0d5df03ab339dcf8adc8b3013286e7190e88875ce507a98ace449d78582352f4 Nov 29 06:34:32 crc kubenswrapper[4947]: I1129 06:34:32.967308 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":
{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:32Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.008908 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.032124 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.032182 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.032195 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 
06:34:33.032239 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.032253 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:33Z","lastTransitionTime":"2025-11-29T06:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.051623 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.090597 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4
f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.132865 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.134285 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.134328 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.134338 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.134353 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.134365 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:33Z","lastTransitionTime":"2025-11-29T06:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.164939 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.178191 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:33 crc kubenswrapper[4947]: E1129 06:34:33.178390 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.209002 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.236453 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.236491 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.236502 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.236519 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.236530 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:33Z","lastTransitionTime":"2025-11-29T06:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.247433 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.291264 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.333886 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.339063 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.339100 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.339112 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.339129 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.339139 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:33Z","lastTransitionTime":"2025-11-29T06:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.341494 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ttw9v" event={"ID":"8440d6ae-a357-461e-a91f-a48625b4a9da","Type":"ContainerStarted","Data":"0d5df03ab339dcf8adc8b3013286e7190e88875ce507a98ace449d78582352f4"} Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.345146 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerStarted","Data":"b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415"} Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.345186 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerStarted","Data":"03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e"} Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.345199 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerStarted","Data":"ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2"} Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.347508 4947 generic.go:334] "Generic (PLEG): container finished" podID="6063fc55-4365-4f22-a005-bfac3812fdce" containerID="406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa" exitCode=0 Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.347622 4947 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" event={"ID":"6063fc55-4365-4f22-a005-bfac3812fdce","Type":"ContainerDied","Data":"406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa"} Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.368442 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.409758 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reas
on\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"
name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.441552 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.441596 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.441604 4947 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.441619 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.441629 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:33Z","lastTransitionTime":"2025-11-29T06:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.447943 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for 
pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.485983 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.539027 4947 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.543894 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.544960 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.544995 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.545008 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.545030 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.545044 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:33Z","lastTransitionTime":"2025-11-29T06:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.595157 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.628308 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.648121 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.648153 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.648161 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.648179 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.648189 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:33Z","lastTransitionTime":"2025-11-29T06:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.665140 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.708658 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3
a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.750498 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.750544 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.750557 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.750576 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.750589 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:33Z","lastTransitionTime":"2025-11-29T06:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.752892 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f40
5f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.790017 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.840015 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.852942 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.852984 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.852992 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.853012 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.853020 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:33Z","lastTransitionTime":"2025-11-29T06:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.869337 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.911908 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.949481 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.955804 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.955865 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.955881 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.955903 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.955920 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:33Z","lastTransitionTime":"2025-11-29T06:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:33 crc kubenswrapper[4947]: I1129 06:34:33.994748 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:33Z 
is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.035923 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4
ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.058590 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.058640 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.058653 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.058673 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.058686 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:34Z","lastTransitionTime":"2025-11-29T06:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.073000 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.108698 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.154890 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.161246 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.161296 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.161308 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.161324 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.161337 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:34Z","lastTransitionTime":"2025-11-29T06:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.178370 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.178456 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:34 crc kubenswrapper[4947]: E1129 06:34:34.178569 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:34:34 crc kubenswrapper[4947]: E1129 06:34:34.178633 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.188715 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.237070 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.264087 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.264158 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.264173 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.264195 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.264207 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:34Z","lastTransitionTime":"2025-11-29T06:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.269719 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.310924 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba544423
3f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.349278 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.354291 4947 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-image-registry/node-ca-ttw9v" event={"ID":"8440d6ae-a357-461e-a91f-a48625b4a9da","Type":"ContainerStarted","Data":"6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3"} Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.357616 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" event={"ID":"6063fc55-4365-4f22-a005-bfac3812fdce","Type":"ContainerStarted","Data":"a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353"} Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.366967 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.367005 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.367014 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.367053 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.367063 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:34Z","lastTransitionTime":"2025-11-29T06:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.386786 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.430432 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.471255 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:34 crc 
kubenswrapper[4947]: I1129 06:34:34.471324 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.471337 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.471360 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.471374 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:34Z","lastTransitionTime":"2025-11-29T06:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.477950 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.511200 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.551864 4947 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.574248 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.574298 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.574313 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.574337 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.574350 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:34Z","lastTransitionTime":"2025-11-29T06:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.590759 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.631107 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.670932 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.677681 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.677740 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.677760 4947 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.677788 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.677810 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:34Z","lastTransitionTime":"2025-11-29T06:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.712354 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.752603 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.780856 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.780919 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.780935 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.780965 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.780984 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:34Z","lastTransitionTime":"2025-11-29T06:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.793779 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.832997 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.869655 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.883613 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.883655 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.883666 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.883686 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.883700 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:34Z","lastTransitionTime":"2025-11-29T06:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.908476 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.951578 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 
06:34:34.986962 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.987053 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.987078 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.987127 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.987153 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:34Z","lastTransitionTime":"2025-11-29T06:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:34 crc kubenswrapper[4947]: I1129 06:34:34.989895 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:34Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.090258 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.090304 4947 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.090315 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.090332 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.090343 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:35Z","lastTransitionTime":"2025-11-29T06:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.178171 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:35 crc kubenswrapper[4947]: E1129 06:34:35.178350 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.192387 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.192434 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.192447 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.192465 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.192477 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:35Z","lastTransitionTime":"2025-11-29T06:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.294478 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.294523 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.294533 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.294573 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.294588 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:35Z","lastTransitionTime":"2025-11-29T06:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.364184 4947 generic.go:334] "Generic (PLEG): container finished" podID="6063fc55-4365-4f22-a005-bfac3812fdce" containerID="a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353" exitCode=0 Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.364285 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" event={"ID":"6063fc55-4365-4f22-a005-bfac3812fdce","Type":"ContainerDied","Data":"a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353"} Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.370444 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerStarted","Data":"fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797"} Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.391755 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:35Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.397295 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.397349 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.397360 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.397378 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.397390 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:35Z","lastTransitionTime":"2025-11-29T06:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.409199 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:35Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.423614 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:35Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.435401 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:35Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.447938 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:35Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.489555 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:35Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.505815 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.505857 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.505868 
4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.505889 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.505906 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:35Z","lastTransitionTime":"2025-11-29T06:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.506573 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:35Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.525545 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:35Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.541856 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:35Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.556424 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:35Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.578865 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:35Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.596122 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T06:34:35Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.608211 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.608253 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.608263 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.608278 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.608286 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:35Z","lastTransitionTime":"2025-11-29T06:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.612658 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:35Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.623940 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-29T06:34:35Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.634687 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:35Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.710439 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.710510 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.710531 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.710558 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.710576 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:35Z","lastTransitionTime":"2025-11-29T06:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.813080 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.813119 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.813129 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.813150 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.813161 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:35Z","lastTransitionTime":"2025-11-29T06:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.916162 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.916257 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.916278 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.916306 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:35 crc kubenswrapper[4947]: I1129 06:34:35.916323 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:35Z","lastTransitionTime":"2025-11-29T06:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.021274 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.021749 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.021879 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.021976 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.022054 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:36Z","lastTransitionTime":"2025-11-29T06:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.091831 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:34:36 crc kubenswrapper[4947]: E1129 06:34:36.092177 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 06:34:44.092157651 +0000 UTC m=+35.136539732 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.092207 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.092255 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:36 crc kubenswrapper[4947]: E1129 06:34:36.092312 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 06:34:36 crc kubenswrapper[4947]: E1129 06:34:36.092331 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 06:34:36 crc kubenswrapper[4947]: E1129 06:34:36.092354 4947 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:44.092346295 +0000 UTC m=+35.136728376 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 06:34:36 crc kubenswrapper[4947]: E1129 06:34:36.092368 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:44.092361196 +0000 UTC m=+35.136743277 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.124889 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.124926 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.124936 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.124953 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.124962 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:36Z","lastTransitionTime":"2025-11-29T06:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.177925 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:36 crc kubenswrapper[4947]: E1129 06:34:36.178338 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.177986 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:34:36 crc kubenswrapper[4947]: E1129 06:34:36.178590 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.193999 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.194075 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:36 crc kubenswrapper[4947]: E1129 06:34:36.194335 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 06:34:36 crc kubenswrapper[4947]: E1129 06:34:36.194384 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 06:34:36 crc kubenswrapper[4947]: E1129 06:34:36.194411 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:36 crc kubenswrapper[4947]: E1129 06:34:36.194492 4947 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:44.194465323 +0000 UTC m=+35.238847434 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:36 crc kubenswrapper[4947]: E1129 06:34:36.194805 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 06:34:36 crc kubenswrapper[4947]: E1129 06:34:36.194925 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 06:34:36 crc kubenswrapper[4947]: E1129 06:34:36.195022 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:36 crc kubenswrapper[4947]: E1129 06:34:36.195171 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:44.195146899 +0000 UTC m=+35.239529150 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.227759 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.227804 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.227822 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.227839 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.227855 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:36Z","lastTransitionTime":"2025-11-29T06:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.330956 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.331672 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.331786 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.331886 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.331953 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:36Z","lastTransitionTime":"2025-11-29T06:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.378606 4947 generic.go:334] "Generic (PLEG): container finished" podID="6063fc55-4365-4f22-a005-bfac3812fdce" containerID="2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee" exitCode=0 Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.378659 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" event={"ID":"6063fc55-4365-4f22-a005-bfac3812fdce","Type":"ContainerDied","Data":"2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee"} Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.403448 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-d
ir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:36Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.422012 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:36Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.434565 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.434609 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.434621 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.434642 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.434657 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:36Z","lastTransitionTime":"2025-11-29T06:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.438167 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:36Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.450928 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:36Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.468138 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:36Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.485979 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:36Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.503616 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:36Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.525578 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:36Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.538089 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.538137 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.538150 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.538170 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.538183 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:36Z","lastTransitionTime":"2025-11-29T06:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.542511 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:36Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.559541 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:36Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.584366 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:36Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.601665 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T06:34:36Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.618310 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:36Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.631433 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b3
5462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:36Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.641124 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.641170 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.641183 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:36 crc 
kubenswrapper[4947]: I1129 06:34:36.641205 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.641237 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:36Z","lastTransitionTime":"2025-11-29T06:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.642627 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:36Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.744569 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.744614 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.744626 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.744643 4947 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.744655 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:36Z","lastTransitionTime":"2025-11-29T06:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.848117 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.848160 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.848169 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.848185 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.848194 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:36Z","lastTransitionTime":"2025-11-29T06:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.952057 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.952099 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.952110 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.952126 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:36 crc kubenswrapper[4947]: I1129 06:34:36.952137 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:36Z","lastTransitionTime":"2025-11-29T06:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.054670 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.054748 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.054770 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.054799 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.054823 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:37Z","lastTransitionTime":"2025-11-29T06:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.157958 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.158007 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.158020 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.158036 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.158046 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:37Z","lastTransitionTime":"2025-11-29T06:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.178380 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:37 crc kubenswrapper[4947]: E1129 06:34:37.178586 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.260781 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.261036 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.261079 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.261098 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.261111 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:37Z","lastTransitionTime":"2025-11-29T06:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.364003 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.364055 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.364067 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.364085 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.364097 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:37Z","lastTransitionTime":"2025-11-29T06:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.467401 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.467469 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.467486 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.467512 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.467528 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:37Z","lastTransitionTime":"2025-11-29T06:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.571061 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.571131 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.571154 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.571186 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.571212 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:37Z","lastTransitionTime":"2025-11-29T06:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.674840 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.674878 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.674889 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.674906 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.674918 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:37Z","lastTransitionTime":"2025-11-29T06:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.778190 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.778319 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.778347 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.778377 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.778400 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:37Z","lastTransitionTime":"2025-11-29T06:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.881205 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.881275 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.881287 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.881305 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.881316 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:37Z","lastTransitionTime":"2025-11-29T06:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.984675 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.984738 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.984756 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.984782 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:37 crc kubenswrapper[4947]: I1129 06:34:37.984799 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:37Z","lastTransitionTime":"2025-11-29T06:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.087739 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.087811 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.087831 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.087861 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.087887 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:38Z","lastTransitionTime":"2025-11-29T06:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.178687 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.178794 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:38 crc kubenswrapper[4947]: E1129 06:34:38.178853 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:34:38 crc kubenswrapper[4947]: E1129 06:34:38.179001 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.191159 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.191207 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.191239 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.191265 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.191280 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:38Z","lastTransitionTime":"2025-11-29T06:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.302242 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.302285 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.302297 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.302315 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.302325 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:38Z","lastTransitionTime":"2025-11-29T06:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.389608 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerStarted","Data":"b232ce88d09a01c32778e258576e6dc2f9ae180513aaf31ffa19ab6e892fefaf"} Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.389851 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.397351 4947 generic.go:334] "Generic (PLEG): container finished" podID="6063fc55-4365-4f22-a005-bfac3812fdce" containerID="a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38" exitCode=0 Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.397448 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" event={"ID":"6063fc55-4365-4f22-a005-bfac3812fdce","Type":"ContainerDied","Data":"a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38"} Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.404758 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.404845 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.404874 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.404908 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.404931 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:38Z","lastTransitionTime":"2025-11-29T06:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.412271 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.429583 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.434468 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.445281 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.466858 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b232ce88d09a01c32778e258576e6dc2f9ae180513aaf31ffa19ab6e892fefaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.483955 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.504273 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba544423
3f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
1-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.509004 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.509048 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.509058 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.509077 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.509086 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:38Z","lastTransitionTime":"2025-11-29T06:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.524010 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.537391 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.559364 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\
\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.574797 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.595200 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.609854 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.612555 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.612598 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.612609 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.612626 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.612638 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:38Z","lastTransitionTime":"2025-11-29T06:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.623075 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z 
is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.637553 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4
ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.655546 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.674199 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.688630 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.704111 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.716073 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.716108 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.716119 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.716134 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.716145 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:38Z","lastTransitionTime":"2025-11-29T06:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.723079 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.736904 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.758272 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b232ce88d09a01c32778e258576e6dc2f9ae180513aaf31ffa19ab6e892fefaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.789964 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.818861 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.818898 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.818907 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.818924 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.818935 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:38Z","lastTransitionTime":"2025-11-29T06:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.832830 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.851647 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b3
5462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.864167 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.888007 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06
:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.905198 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.919075 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.921336 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.921368 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.921378 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.921395 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.921406 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:38Z","lastTransitionTime":"2025-11-29T06:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.932143 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:38 crc kubenswrapper[4947]: I1129 06:34:38.950721 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.023578 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.023645 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.023665 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.023696 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.023720 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:39Z","lastTransitionTime":"2025-11-29T06:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.127421 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.127531 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.127541 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.127559 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.127568 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:39Z","lastTransitionTime":"2025-11-29T06:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.178633 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:39 crc kubenswrapper[4947]: E1129 06:34:39.178826 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.199710 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.218026 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.231277 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:39 crc 
kubenswrapper[4947]: I1129 06:34:39.231373 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.231402 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.231439 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.231469 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:39Z","lastTransitionTime":"2025-11-29T06:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.239534 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.259821 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.274730 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.304211 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b232ce88d09a01c32778e258576e6dc2f9ae180513aaf31ffa19ab6e892fefaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.318658 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.330903 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.334360 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.334431 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.334444 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.334462 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.334475 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:39Z","lastTransitionTime":"2025-11-29T06:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.355385 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.372579 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b3
5462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.392808 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.403317 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" 
event={"ID":"6063fc55-4365-4f22-a005-bfac3812fdce","Type":"ContainerStarted","Data":"7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6"} Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.403451 4947 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.403912 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.416375 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.430990 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.436715 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.437779 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.437808 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.437831 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.437853 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.437868 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:39Z","lastTransitionTime":"2025-11-29T06:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.456944 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.471745 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.494288 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.513287 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.530448 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.541122 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.541177 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.541191 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 
06:34:39.541214 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.541242 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:39Z","lastTransitionTime":"2025-11-29T06:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.556268 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b232ce88d09a01c32778e258576e6dc2f9ae180513aaf31ffa19ab6e892fefaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.574651 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.589654 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c68
12845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.606673 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: 
I1129 06:34:39.619601 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.643792 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-d
ir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.644378 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.644427 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.644437 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.644454 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.644465 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:39Z","lastTransitionTime":"2025-11-29T06:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.667273 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 
06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.681810 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.695387 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.720234 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.735068 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.746954 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.747413 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.747447 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.747458 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.747487 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.747499 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:39Z","lastTransitionTime":"2025-11-29T06:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.850486 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.850534 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.850549 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.850571 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.850588 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:39Z","lastTransitionTime":"2025-11-29T06:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.952586 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.952649 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.952659 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.952674 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:39 crc kubenswrapper[4947]: I1129 06:34:39.952701 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:39Z","lastTransitionTime":"2025-11-29T06:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.056671 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.056732 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.056749 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.056771 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.056783 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:40Z","lastTransitionTime":"2025-11-29T06:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.160752 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.160807 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.160819 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.160837 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.160849 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:40Z","lastTransitionTime":"2025-11-29T06:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.178427 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.178488 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:34:40 crc kubenswrapper[4947]: E1129 06:34:40.178644 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:34:40 crc kubenswrapper[4947]: E1129 06:34:40.178740 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.262934 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.262974 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.263001 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.263017 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.263026 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:40Z","lastTransitionTime":"2025-11-29T06:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.365903 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.365936 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.365944 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.365958 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.365969 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:40Z","lastTransitionTime":"2025-11-29T06:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.408071 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z4rxq_dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/ovnkube-controller/0.log" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.410321 4947 generic.go:334] "Generic (PLEG): container finished" podID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerID="b232ce88d09a01c32778e258576e6dc2f9ae180513aaf31ffa19ab6e892fefaf" exitCode=1 Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.410366 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerDied","Data":"b232ce88d09a01c32778e258576e6dc2f9ae180513aaf31ffa19ab6e892fefaf"} Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.411153 4947 scope.go:117] "RemoveContainer" containerID="b232ce88d09a01c32778e258576e6dc2f9ae180513aaf31ffa19ab6e892fefaf" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.426898 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:40Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.442347 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:40Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.457799 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:40Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.467961 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.468006 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.468018 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 
06:34:40.468034 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.468045 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:40Z","lastTransitionTime":"2025-11-29T06:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.476348 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b232ce88d09a01c32778e258576e6dc2f9ae180513aaf31ffa19ab6e892fefaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b232ce88d09a01c32778e258576e6dc2f9ae180513aaf31ffa19ab6e892fefaf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:40Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:40.269714 6222 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1129 06:34:40.269807 6222 handler.go:190] Sending *v1.Namespace 
event handler 1 for removal\\\\nI1129 06:34:40.269821 6222 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1129 06:34:40.269846 6222 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1129 06:34:40.269862 6222 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 06:34:40.269866 6222 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1129 06:34:40.269884 6222 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 06:34:40.269897 6222 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 06:34:40.269908 6222 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 06:34:40.269910 6222 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 06:34:40.269914 6222 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1129 06:34:40.269920 6222 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 06:34:40.269931 6222 handler.go:208] Removed *v1.Node event handler 7\\\\nI1129 06:34:40.269965 6222 factory.go:656] Stopping watch factory\\\\nI1129 06:34:40.269981 6222 ovnkube.go:599] Stopped 
ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d
2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:40Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.492598 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T06:34:40Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.509279 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:40Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.586914 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.587005 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.587014 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.587028 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.587038 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:40Z","lastTransitionTime":"2025-11-29T06:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.594127 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:40Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.605730 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:40Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.627484 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06
:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:40Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.640172 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:40Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.654713 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:40Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.666064 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:40Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.678827 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:40Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.689328 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.689370 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.689380 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.689395 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.689405 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:40Z","lastTransitionTime":"2025-11-29T06:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.692702 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f40
5f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:40Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.708462 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:40Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.792158 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 
06:34:40.792209 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.792238 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.792254 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.792263 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:40Z","lastTransitionTime":"2025-11-29T06:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.894342 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.894389 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.894400 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.894419 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.894431 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:40Z","lastTransitionTime":"2025-11-29T06:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.997032 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.997074 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.997083 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.997099 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:40 crc kubenswrapper[4947]: I1129 06:34:40.997109 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:40Z","lastTransitionTime":"2025-11-29T06:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.100071 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.100115 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.100126 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.100144 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.100153 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:41Z","lastTransitionTime":"2025-11-29T06:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.177882 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:41 crc kubenswrapper[4947]: E1129 06:34:41.178042 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.202026 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.202068 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.202079 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.202095 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.202106 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:41Z","lastTransitionTime":"2025-11-29T06:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.304942 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.305002 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.305018 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.305039 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.305051 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:41Z","lastTransitionTime":"2025-11-29T06:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.408015 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.408068 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.408090 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.408112 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.408127 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:41Z","lastTransitionTime":"2025-11-29T06:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.416187 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z4rxq_dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/ovnkube-controller/0.log" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.420074 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerStarted","Data":"d8310aad81d0ce5b52b2c8c5d096733fcabcd9f0d0c9cbae9cf7be55a2ca3e0b"} Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.420239 4947 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.440570 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb6
8e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee78
66be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1
e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:41Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.454690 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:41Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.471413 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:41Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.485337 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:41Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.505362 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:41Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.512016 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.512077 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.512089 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.512111 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.512124 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:41Z","lastTransitionTime":"2025-11-29T06:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.524545 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f40
5f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:41Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.537624 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:41Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.553701 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:41Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.569103 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:41Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.581597 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:41Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.607534 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8310aad81d0ce5b52b2c8c5d096733fcabcd9f0d0c9cbae9cf7be55a2ca3e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b232ce88d09a01c32778e258576e6dc2f9ae180513aaf31ffa19ab6e892fefaf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:40Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:40.269714 6222 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1129 06:34:40.269807 6222 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1129 06:34:40.269821 6222 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1129 06:34:40.269846 6222 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1129 06:34:40.269862 6222 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 06:34:40.269866 6222 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1129 06:34:40.269884 6222 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 06:34:40.269897 6222 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 06:34:40.269908 6222 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 06:34:40.269910 6222 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 06:34:40.269914 6222 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1129 06:34:40.269920 6222 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 06:34:40.269931 6222 handler.go:208] Removed *v1.Node event handler 7\\\\nI1129 06:34:40.269965 6222 factory.go:656] Stopping watch factory\\\\nI1129 06:34:40.269981 6222 ovnkube.go:599] Stopped 
ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\
",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:41Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.614544 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.614665 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.614751 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.614824 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.614890 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:41Z","lastTransitionTime":"2025-11-29T06:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.621501 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:41Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.634619 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:41Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.645575 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b3
5462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:41Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.655798 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:41Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.717443 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.717714 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.717796 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.717873 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.717940 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:41Z","lastTransitionTime":"2025-11-29T06:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.820467 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.820513 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.820527 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.820545 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.820558 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:41Z","lastTransitionTime":"2025-11-29T06:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.923040 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.923082 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.923093 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.923111 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:41 crc kubenswrapper[4947]: I1129 06:34:41.923122 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:41Z","lastTransitionTime":"2025-11-29T06:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.026854 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.027486 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.027657 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.027898 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.028213 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:42Z","lastTransitionTime":"2025-11-29T06:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.253516 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:42 crc kubenswrapper[4947]: E1129 06:34:42.253681 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.253753 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:34:42 crc kubenswrapper[4947]: E1129 06:34:42.253793 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.255367 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.255402 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.255411 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.255426 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.255436 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:42Z","lastTransitionTime":"2025-11-29T06:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.297941 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.297975 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.297983 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.297997 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.298006 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:42Z","lastTransitionTime":"2025-11-29T06:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:42 crc kubenswrapper[4947]: E1129 06:34:42.309474 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:42Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.312850 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.312877 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.312888 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.312902 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.312910 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:42Z","lastTransitionTime":"2025-11-29T06:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:42 crc kubenswrapper[4947]: E1129 06:34:42.323549 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:42Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.327964 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.328005 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.328038 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.328060 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.328108 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:42Z","lastTransitionTime":"2025-11-29T06:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:42 crc kubenswrapper[4947]: E1129 06:34:42.345191 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:42Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.349655 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.349725 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.349750 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.349778 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.349799 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:42Z","lastTransitionTime":"2025-11-29T06:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:42 crc kubenswrapper[4947]: E1129 06:34:42.365246 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:42Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.369874 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.369920 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.369933 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.369952 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.369965 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:42Z","lastTransitionTime":"2025-11-29T06:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:42 crc kubenswrapper[4947]: E1129 06:34:42.384066 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:42Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:42 crc kubenswrapper[4947]: E1129 06:34:42.384366 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.385856 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.385901 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.385913 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.385929 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.385939 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:42Z","lastTransitionTime":"2025-11-29T06:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.423321 4947 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.488853 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.488912 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.488936 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.488966 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.488989 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:42Z","lastTransitionTime":"2025-11-29T06:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.591332 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.591400 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.591417 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.591446 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.591466 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:42Z","lastTransitionTime":"2025-11-29T06:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.694781 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.694843 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.694860 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.694884 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.694911 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:42Z","lastTransitionTime":"2025-11-29T06:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.797723 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.797781 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.797798 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.797821 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.797839 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:42Z","lastTransitionTime":"2025-11-29T06:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.900984 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.901041 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.901061 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.901085 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:42 crc kubenswrapper[4947]: I1129 06:34:42.901101 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:42Z","lastTransitionTime":"2025-11-29T06:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.002752 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq"] Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.003493 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.005486 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.005519 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.005529 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.005552 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.005563 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:43Z","lastTransitionTime":"2025-11-29T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.006976 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.007047 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.024483 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:43Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.044691 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a287baf-0c87-4698-9553-6f94927fbf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b25cq\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:43Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.059995 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:43Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.062491 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0a287baf-0c87-4698-9553-6f94927fbf78-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-b25cq\" (UID: \"0a287baf-0c87-4698-9553-6f94927fbf78\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.062542 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbmt5\" (UniqueName: \"kubernetes.io/projected/0a287baf-0c87-4698-9553-6f94927fbf78-kube-api-access-sbmt5\") pod \"ovnkube-control-plane-749d76644c-b25cq\" (UID: \"0a287baf-0c87-4698-9553-6f94927fbf78\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.062573 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0a287baf-0c87-4698-9553-6f94927fbf78-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-b25cq\" (UID: \"0a287baf-0c87-4698-9553-6f94927fbf78\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 
06:34:43.062600 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0a287baf-0c87-4698-9553-6f94927fbf78-env-overrides\") pod \"ovnkube-control-plane-749d76644c-b25cq\" (UID: \"0a287baf-0c87-4698-9553-6f94927fbf78\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.078713 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085
a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\
\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\
\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:43Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.094533 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b3
5462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:43Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.112705 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.112773 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.112880 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:43 crc 
kubenswrapper[4947]: I1129 06:34:43.112913 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.112937 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:43Z","lastTransitionTime":"2025-11-29T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.114139 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:43Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.148069 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\
\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:43Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.163891 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0a287baf-0c87-4698-9553-6f94927fbf78-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-b25cq\" (UID: \"0a287baf-0c87-4698-9553-6f94927fbf78\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.163952 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sbmt5\" (UniqueName: \"kubernetes.io/projected/0a287baf-0c87-4698-9553-6f94927fbf78-kube-api-access-sbmt5\") pod \"ovnkube-control-plane-749d76644c-b25cq\" (UID: \"0a287baf-0c87-4698-9553-6f94927fbf78\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.164026 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0a287baf-0c87-4698-9553-6f94927fbf78-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-b25cq\" (UID: \"0a287baf-0c87-4698-9553-6f94927fbf78\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.164117 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0a287baf-0c87-4698-9553-6f94927fbf78-env-overrides\") pod \"ovnkube-control-plane-749d76644c-b25cq\" (UID: \"0a287baf-0c87-4698-9553-6f94927fbf78\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.165812 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0a287baf-0c87-4698-9553-6f94927fbf78-env-overrides\") pod \"ovnkube-control-plane-749d76644c-b25cq\" (UID: \"0a287baf-0c87-4698-9553-6f94927fbf78\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.166568 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0a287baf-0c87-4698-9553-6f94927fbf78-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-b25cq\" (UID: \"0a287baf-0c87-4698-9553-6f94927fbf78\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.171094 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04
d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:43Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.176636 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0a287baf-0c87-4698-9553-6f94927fbf78-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-b25cq\" (UID: \"0a287baf-0c87-4698-9553-6f94927fbf78\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.178450 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:43 crc kubenswrapper[4947]: E1129 06:34:43.178638 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.194381 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbmt5\" (UniqueName: \"kubernetes.io/projected/0a287baf-0c87-4698-9553-6f94927fbf78-kube-api-access-sbmt5\") pod \"ovnkube-control-plane-749d76644c-b25cq\" (UID: \"0a287baf-0c87-4698-9553-6f94927fbf78\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.196205 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:43Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.212009 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:43Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.216773 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.216833 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.216850 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.216876 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.216893 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:43Z","lastTransitionTime":"2025-11-29T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.233190 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f40
5f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:43Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.247733 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:43Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.266441 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:43Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.283253 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:43Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.298544 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:43Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.319280 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.319319 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.319333 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.319351 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.319361 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:43Z","lastTransitionTime":"2025-11-29T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.322785 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8310aad81d0ce5b52b2c8c5d096733fcabcd9f0d0c9cbae9cf7be55a2ca3e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b232ce88d09a01c32778e258576e6dc2f9ae180513aaf31ffa19ab6e892fefaf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:40Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:40.269714 6222 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1129 06:34:40.269807 6222 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1129 06:34:40.269821 6222 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI1129 06:34:40.269846 6222 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1129 06:34:40.269862 6222 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 06:34:40.269866 6222 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1129 06:34:40.269884 6222 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 06:34:40.269897 6222 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 06:34:40.269908 6222 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 06:34:40.269910 6222 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 06:34:40.269914 6222 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1129 06:34:40.269920 6222 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 06:34:40.269931 6222 handler.go:208] Removed *v1.Node event handler 7\\\\nI1129 06:34:40.269965 6222 factory.go:656] Stopping watch factory\\\\nI1129 06:34:40.269981 6222 ovnkube.go:599] Stopped 
ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\
",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:43Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.333852 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.423591 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.423620 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.423651 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.423665 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.423673 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:43Z","lastTransitionTime":"2025-11-29T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.427053 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" event={"ID":"0a287baf-0c87-4698-9553-6f94927fbf78","Type":"ContainerStarted","Data":"b54f2a1549bbe42234900f501c1c7e79751a5fa033a31c5828ba3bd3ecf82d0c"} Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.526286 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.526333 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.526342 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.526357 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.526367 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:43Z","lastTransitionTime":"2025-11-29T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.628789 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.628835 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.628849 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.628867 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.628879 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:43Z","lastTransitionTime":"2025-11-29T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.732340 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.732393 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.732406 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.732422 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.732432 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:43Z","lastTransitionTime":"2025-11-29T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.835106 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.835154 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.835165 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.835182 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.835193 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:43Z","lastTransitionTime":"2025-11-29T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.937979 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.938046 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.938060 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.938078 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:43 crc kubenswrapper[4947]: I1129 06:34:43.938089 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:43Z","lastTransitionTime":"2025-11-29T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.040939 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.041017 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.041030 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.041052 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.041064 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:44Z","lastTransitionTime":"2025-11-29T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.136308 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-2fbj5"] Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.136781 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:34:44 crc kubenswrapper[4947]: E1129 06:34:44.136848 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.143781 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.143827 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.143839 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.143859 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.143873 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:44Z","lastTransitionTime":"2025-11-29T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.151609 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f40
5f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.165329 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.173805 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.173919 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.173964 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxj27\" (UniqueName: \"kubernetes.io/projected/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-kube-api-access-nxj27\") pod \"network-metrics-daemon-2fbj5\" (UID: \"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\") " pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.173988 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs\") pod \"network-metrics-daemon-2fbj5\" (UID: \"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\") " pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.174013 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:44 crc kubenswrapper[4947]: E1129 06:34:44.174065 4947 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:35:00.174052371 +0000 UTC m=+51.218434452 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:34:44 crc kubenswrapper[4947]: E1129 06:34:44.174159 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 06:34:44 crc kubenswrapper[4947]: E1129 06:34:44.174239 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 06:34:44 crc kubenswrapper[4947]: E1129 06:34:44.174284 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 06:35:00.174277617 +0000 UTC m=+51.218659698 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 06:34:44 crc kubenswrapper[4947]: E1129 06:34:44.174393 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 06:35:00.174350998 +0000 UTC m=+51.218733119 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.180722 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fbj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc 
kubenswrapper[4947]: I1129 06:34:44.182757 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:44 crc kubenswrapper[4947]: E1129 06:34:44.182880 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.182949 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:34:44 crc kubenswrapper[4947]: E1129 06:34:44.183022 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.194213 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.207690 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.221157 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.387374 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs\") pod \"network-metrics-daemon-2fbj5\" (UID: \"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\") " pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.387523 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" 
(UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.387553 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.387594 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxj27\" (UniqueName: \"kubernetes.io/projected/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-kube-api-access-nxj27\") pod \"network-metrics-daemon-2fbj5\" (UID: \"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\") " pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:34:44 crc kubenswrapper[4947]: E1129 06:34:44.388814 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 06:34:44 crc kubenswrapper[4947]: E1129 06:34:44.388827 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 06:34:44 crc kubenswrapper[4947]: E1129 06:34:44.388912 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 06:34:44 crc kubenswrapper[4947]: E1129 06:34:44.388926 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: 
[object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:44 crc kubenswrapper[4947]: E1129 06:34:44.388948 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 06:34:44 crc kubenswrapper[4947]: E1129 06:34:44.388967 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 06:34:44 crc kubenswrapper[4947]: E1129 06:34:44.388890 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs podName:53a3bcac-8ad0-47ce-abee-ee56fd152ea8 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:44.888867396 +0000 UTC m=+35.933249477 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs") pod "network-metrics-daemon-2fbj5" (UID: "53a3bcac-8ad0-47ce-abee-ee56fd152ea8") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 06:34:44 crc kubenswrapper[4947]: E1129 06:34:44.388992 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:44 crc kubenswrapper[4947]: E1129 06:34:44.389008 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-11-29 06:35:00.388992879 +0000 UTC m=+51.433374960 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:44 crc kubenswrapper[4947]: E1129 06:34:44.389053 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 06:35:00.389046721 +0000 UTC m=+51.433428802 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.391895 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.391950 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.391963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.391982 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 
29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.391996 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:44Z","lastTransitionTime":"2025-11-29T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.409804 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8310aad81d0ce5b52b2c8c5d096733fcabcd9f0d0c9cbae9cf7be55a2ca3e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b232ce88d09a01c32778e258576e6dc2f9ae180513aaf31ffa19ab6e892fefaf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:40Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:40.269714 6222 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1129 06:34:40.269807 6222 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1129 06:34:40.269821 6222 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI1129 06:34:40.269846 6222 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1129 06:34:40.269862 6222 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 06:34:40.269866 6222 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1129 06:34:40.269884 6222 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 06:34:40.269897 6222 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 06:34:40.269908 6222 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 06:34:40.269910 6222 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 06:34:40.269914 6222 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1129 06:34:40.269920 6222 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 06:34:40.269931 6222 handler.go:208] Removed *v1.Node event handler 7\\\\nI1129 06:34:40.269965 6222 factory.go:656] Stopping watch factory\\\\nI1129 06:34:40.269981 6222 ovnkube.go:599] Stopped 
ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\
",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.413319 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxj27\" (UniqueName: \"kubernetes.io/projected/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-kube-api-access-nxj27\") pod \"network-metrics-daemon-2fbj5\" (UID: \"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\") " pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.421776 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.434053 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" event={"ID":"0a287baf-0c87-4698-9553-6f94927fbf78","Type":"ContainerStarted","Data":"754f770b2635be1fc785e0cc958e0c885dd7516ba54de760493ec7778d738708"} Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.434110 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" event={"ID":"0a287baf-0c87-4698-9553-6f94927fbf78","Type":"ContainerStarted","Data":"60e69ea28bcbeb8379671147cd41f131b8b37b41a285319b082c43381a56cdc1"} Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.435306 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a287baf-0c87-4698-9553-6f94927fbf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b25cq\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.436067 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z4rxq_dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/ovnkube-controller/1.log" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.436675 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z4rxq_dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/ovnkube-controller/0.log" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.439437 4947 generic.go:334] "Generic (PLEG): container finished" podID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerID="d8310aad81d0ce5b52b2c8c5d096733fcabcd9f0d0c9cbae9cf7be55a2ca3e0b" exitCode=1 Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.439507 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerDied","Data":"d8310aad81d0ce5b52b2c8c5d096733fcabcd9f0d0c9cbae9cf7be55a2ca3e0b"} Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.439588 4947 scope.go:117] "RemoveContainer" containerID="b232ce88d09a01c32778e258576e6dc2f9ae180513aaf31ffa19ab6e892fefaf" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.440346 4947 scope.go:117] "RemoveContainer" containerID="d8310aad81d0ce5b52b2c8c5d096733fcabcd9f0d0c9cbae9cf7be55a2ca3e0b" Nov 29 06:34:44 crc kubenswrapper[4947]: E1129 06:34:44.440508 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z4rxq_openshift-ovn-kubernetes(dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.449510 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.465129 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729
e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.477154 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b3
5462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.492761 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.495228 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:44 crc 
kubenswrapper[4947]: I1129 06:34:44.495258 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.495270 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.495290 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.495304 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:44Z","lastTransitionTime":"2025-11-29T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.523657 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.539004 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.571339 4947 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.583552 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.598001 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.598031 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.598043 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.598068 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.598080 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:44Z","lastTransitionTime":"2025-11-29T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.608205 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.620976 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.632996 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.644048 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.671780 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.693292 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.700873 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.700919 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.700930 
4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.700948 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.700958 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:44Z","lastTransitionTime":"2025-11-29T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.710927 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.728423 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fbj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc 
kubenswrapper[4947]: I1129 06:34:44.744469 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.758513 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.770265 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.787327 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8310aad81d0ce5b52b2c8c5d096733fcabcd9f0d0c9cbae9cf7be55a2ca3e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b232ce88d09a01c32778e258576e6dc2f9ae180513aaf31ffa19ab6e892fefaf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:40Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:40.269714 6222 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1129 06:34:40.269807 6222 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1129 06:34:40.269821 6222 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI1129 06:34:40.269846 6222 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1129 06:34:40.269862 6222 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 06:34:40.269866 6222 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1129 06:34:40.269884 6222 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 06:34:40.269897 6222 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 06:34:40.269908 6222 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 06:34:40.269910 6222 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 06:34:40.269914 6222 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1129 06:34:40.269920 6222 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 06:34:40.269931 6222 handler.go:208] Removed *v1.Node event handler 7\\\\nI1129 06:34:40.269965 6222 factory.go:656] Stopping watch factory\\\\nI1129 06:34:40.269981 6222 ovnkube.go:599] Stopped ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8310aad81d0ce5b52b2c8c5d096733fcabcd9f0d0c9cbae9cf7be55a2ca3e0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"message\\\":\\\"303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1129 06:34:41.333749 6382 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1129 06:34:41.333772 6382 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1129 06:34:41.333785 6382 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed 
attempt(s)\\\\nI1129 06:34:41.333798 6382 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1129 06:34:41.333636 6382 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI1129 06:34:41.333864 6382 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1129 06:34:41.333627 6382 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-zb4cv\\\\nI1129 06:34:41.333932 6382 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-zb4cv in node crc\\\\nI1129 06:34:41.333939 6382\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\
\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g
rbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.798673 4947 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a287baf-0c87-4698-9553-6f94927fbf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69ea28bcbeb8379671147cd41f131b8b37b41a285319b082c43381a56cdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754f770b2635be1fc785e0cc958e0c885dd7516ba54de760493ec7778d738708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b25cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.803078 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.803125 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 
06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.803135 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.803150 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.803159 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:44Z","lastTransitionTime":"2025-11-29T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.812688 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.827537 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.837235 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.845592 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34
:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:44Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.893266 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs\") pod \"network-metrics-daemon-2fbj5\" (UID: \"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\") " pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:34:44 crc kubenswrapper[4947]: E1129 06:34:44.893385 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 06:34:44 crc kubenswrapper[4947]: E1129 06:34:44.893463 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs podName:53a3bcac-8ad0-47ce-abee-ee56fd152ea8 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:45.893448564 +0000 UTC m=+36.937830645 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs") pod "network-metrics-daemon-2fbj5" (UID: "53a3bcac-8ad0-47ce-abee-ee56fd152ea8") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.905575 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.905641 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.905664 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.905696 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:44 crc kubenswrapper[4947]: I1129 06:34:44.905718 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:44Z","lastTransitionTime":"2025-11-29T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.008684 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.008772 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.008790 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.009204 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.009336 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:45Z","lastTransitionTime":"2025-11-29T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.112355 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.112398 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.112409 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.112426 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.112437 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:45Z","lastTransitionTime":"2025-11-29T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.130565 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.175158 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8778
1f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:45Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.178438 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:45 crc kubenswrapper[4947]: E1129 06:34:45.178645 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.197280 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:45Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.213014 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:45Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.215292 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.215316 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.215327 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.215343 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.215353 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:45Z","lastTransitionTime":"2025-11-29T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.226131 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:45Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.246562 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3
a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:45Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.269471 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:45Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.289265 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:45Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.309167 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fbj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:45Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:45 crc 
kubenswrapper[4947]: I1129 06:34:45.318679 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.318967 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.319104 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.319267 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.319448 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:45Z","lastTransitionTime":"2025-11-29T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.332740 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:45Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.347317 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:45Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.360920 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:45Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.383708 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8310aad81d0ce5b52b2c8c5d096733fcabcd9f0d0c9cbae9cf7be55a2ca3e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b232ce88d09a01c32778e258576e6dc2f9ae180513aaf31ffa19ab6e892fefaf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:40Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:40.269714 6222 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1129 06:34:40.269807 6222 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1129 06:34:40.269821 6222 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1129 06:34:40.269846 6222 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1129 06:34:40.269862 6222 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 06:34:40.269866 6222 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1129 06:34:40.269884 6222 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 06:34:40.269897 6222 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 06:34:40.269908 6222 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 06:34:40.269910 6222 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 06:34:40.269914 6222 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1129 06:34:40.269920 6222 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 06:34:40.269931 6222 handler.go:208] Removed *v1.Node event handler 7\\\\nI1129 06:34:40.269965 6222 factory.go:656] Stopping watch factory\\\\nI1129 06:34:40.269981 6222 ovnkube.go:599] Stopped ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8310aad81d0ce5b52b2c8c5d096733fcabcd9f0d0c9cbae9cf7be55a2ca3e0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"message\\\":\\\"303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1129 06:34:41.333749 6382 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1129 06:34:41.333772 6382 ovn.go:134] Ensuring zone 
local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1129 06:34:41.333785 6382 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1129 06:34:41.333798 6382 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1129 06:34:41.333636 6382 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI1129 06:34:41.333864 6382 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1129 06:34:41.333627 6382 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-zb4cv\\\\nI1129 06:34:41.333932 6382 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-zb4cv in node crc\\\\nI1129 06:34:41.333939 
6382\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be
1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:45Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.398367 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T06:34:45Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.415988 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:45Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.422618 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.422678 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.422691 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.422709 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.422722 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:45Z","lastTransitionTime":"2025-11-29T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.428815 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:45Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.438625 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:45Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.443405 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z4rxq_dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/ovnkube-controller/1.log" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.451679 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a287baf-0c87-4698-9553-6f94927fbf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69ea28b
cbeb8379671147cd41f131b8b37b41a285319b082c43381a56cdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754f770b2635be1fc785e0cc958e0c885dd7516ba54de760493ec7778d738708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",
\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b25cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:45Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.525579 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.525648 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.525663 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.525685 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.525698 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:45Z","lastTransitionTime":"2025-11-29T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.628909 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.628978 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.628996 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.629022 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.629047 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:45Z","lastTransitionTime":"2025-11-29T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.732330 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.732379 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.732391 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.732407 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.732416 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:45Z","lastTransitionTime":"2025-11-29T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.897897 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.897975 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.897988 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.898008 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.898019 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:45Z","lastTransitionTime":"2025-11-29T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:45 crc kubenswrapper[4947]: I1129 06:34:45.901980 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs\") pod \"network-metrics-daemon-2fbj5\" (UID: \"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\") " pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:34:45 crc kubenswrapper[4947]: E1129 06:34:45.902169 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 06:34:45 crc kubenswrapper[4947]: E1129 06:34:45.902250 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs podName:53a3bcac-8ad0-47ce-abee-ee56fd152ea8 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:47.902234049 +0000 UTC m=+38.946616140 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs") pod "network-metrics-daemon-2fbj5" (UID: "53a3bcac-8ad0-47ce-abee-ee56fd152ea8") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.000871 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.000935 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.000955 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.000980 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.000997 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:46Z","lastTransitionTime":"2025-11-29T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.104048 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.104104 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.104121 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.104144 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.104161 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:46Z","lastTransitionTime":"2025-11-29T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.178566 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.178566 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:46 crc kubenswrapper[4947]: E1129 06:34:46.178831 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.178619 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:34:46 crc kubenswrapper[4947]: E1129 06:34:46.178926 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:34:46 crc kubenswrapper[4947]: E1129 06:34:46.179120 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.207450 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.207499 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.207513 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.207548 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.207563 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:46Z","lastTransitionTime":"2025-11-29T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.309969 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.310007 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.310017 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.310033 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.310044 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:46Z","lastTransitionTime":"2025-11-29T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.413192 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.413253 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.413264 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.413279 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.413289 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:46Z","lastTransitionTime":"2025-11-29T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.516300 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.516352 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.516363 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.516382 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.516398 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:46Z","lastTransitionTime":"2025-11-29T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.619592 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.619684 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.619702 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.619729 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.619748 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:46Z","lastTransitionTime":"2025-11-29T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.723182 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.723275 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.723294 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.723322 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.723339 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:46Z","lastTransitionTime":"2025-11-29T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.827109 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.827185 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.827208 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.827286 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.827308 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:46Z","lastTransitionTime":"2025-11-29T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.931764 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.931837 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.931856 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.931886 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:46 crc kubenswrapper[4947]: I1129 06:34:46.931908 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:46Z","lastTransitionTime":"2025-11-29T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.034529 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.034643 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.034677 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.034714 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.034738 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:47Z","lastTransitionTime":"2025-11-29T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.137067 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.137113 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.137125 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.137143 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.137155 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:47Z","lastTransitionTime":"2025-11-29T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.178699 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:47 crc kubenswrapper[4947]: E1129 06:34:47.178907 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.240841 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.240909 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.240931 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.240962 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.240984 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:47Z","lastTransitionTime":"2025-11-29T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.344264 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.344334 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.344354 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.344382 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.344402 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:47Z","lastTransitionTime":"2025-11-29T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.448305 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.448386 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.448410 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.448438 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.448458 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:47Z","lastTransitionTime":"2025-11-29T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.552282 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.552339 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.552357 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.552384 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.552401 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:47Z","lastTransitionTime":"2025-11-29T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.654951 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.654986 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.654996 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.655012 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.655023 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:47Z","lastTransitionTime":"2025-11-29T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.757395 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.757438 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.757448 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.757463 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.757473 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:47Z","lastTransitionTime":"2025-11-29T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.861049 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.861115 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.861163 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.861210 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.861250 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:47Z","lastTransitionTime":"2025-11-29T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.924962 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs\") pod \"network-metrics-daemon-2fbj5\" (UID: \"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\") " pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:34:47 crc kubenswrapper[4947]: E1129 06:34:47.925261 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 06:34:47 crc kubenswrapper[4947]: E1129 06:34:47.925368 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs podName:53a3bcac-8ad0-47ce-abee-ee56fd152ea8 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:51.925344242 +0000 UTC m=+42.969726363 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs") pod "network-metrics-daemon-2fbj5" (UID: "53a3bcac-8ad0-47ce-abee-ee56fd152ea8") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.964649 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.964722 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.964739 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.964768 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:47 crc kubenswrapper[4947]: I1129 06:34:47.964785 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:47Z","lastTransitionTime":"2025-11-29T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.068060 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.068130 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.068153 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.068183 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.068206 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:48Z","lastTransitionTime":"2025-11-29T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.171415 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.171481 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.171504 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.171532 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.171552 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:48Z","lastTransitionTime":"2025-11-29T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.178826 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.178867 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.178838 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:34:48 crc kubenswrapper[4947]: E1129 06:34:48.179033 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:34:48 crc kubenswrapper[4947]: E1129 06:34:48.179161 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:34:48 crc kubenswrapper[4947]: E1129 06:34:48.179319 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.274755 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.274861 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.274880 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.274945 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.274963 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:48Z","lastTransitionTime":"2025-11-29T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.378124 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.378282 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.378304 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.378329 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.378346 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:48Z","lastTransitionTime":"2025-11-29T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.480847 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.480907 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.480925 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.480949 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.480966 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:48Z","lastTransitionTime":"2025-11-29T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.583107 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.583158 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.583181 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.583206 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.583286 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:48Z","lastTransitionTime":"2025-11-29T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.685985 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.686048 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.686067 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.686094 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.686112 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:48Z","lastTransitionTime":"2025-11-29T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.789829 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.789913 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.789936 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.789966 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.789987 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:48Z","lastTransitionTime":"2025-11-29T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.893361 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.893413 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.893427 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.893445 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.893474 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:48Z","lastTransitionTime":"2025-11-29T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.995936 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.995974 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.995983 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.995998 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:48 crc kubenswrapper[4947]: I1129 06:34:48.996008 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:48Z","lastTransitionTime":"2025-11-29T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.099511 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.099561 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.099576 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.099593 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.099607 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:49Z","lastTransitionTime":"2025-11-29T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.178248 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:49 crc kubenswrapper[4947]: E1129 06:34:49.178395 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.199589 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-d
ir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshi
ft-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.201771 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.201836 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.201849 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.201865 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.202436 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:49Z","lastTransitionTime":"2025-11-29T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.215724 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.230296 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.259147 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8310aad81d0ce5b52b2c8c5d096733fcabcd9f0d0c9cbae9cf7be55a2ca3e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b232ce88d09a01c32778e258576e6dc2f9ae180513aaf31ffa19ab6e892fefaf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:40Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:40.269714 6222 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1129 06:34:40.269807 6222 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1129 06:34:40.269821 6222 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI1129 06:34:40.269846 6222 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1129 06:34:40.269862 6222 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 06:34:40.269866 6222 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1129 06:34:40.269884 6222 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 06:34:40.269897 6222 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 06:34:40.269908 6222 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1129 06:34:40.269910 6222 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 06:34:40.269914 6222 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1129 06:34:40.269920 6222 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 06:34:40.269931 6222 handler.go:208] Removed *v1.Node event handler 7\\\\nI1129 06:34:40.269965 6222 factory.go:656] Stopping watch factory\\\\nI1129 06:34:40.269981 6222 ovnkube.go:599] Stopped ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8310aad81d0ce5b52b2c8c5d096733fcabcd9f0d0c9cbae9cf7be55a2ca3e0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"message\\\":\\\"303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1129 06:34:41.333749 6382 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1129 06:34:41.333772 6382 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1129 06:34:41.333785 6382 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed 
attempt(s)\\\\nI1129 06:34:41.333798 6382 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1129 06:34:41.333636 6382 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI1129 06:34:41.333864 6382 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1129 06:34:41.333627 6382 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-zb4cv\\\\nI1129 06:34:41.333932 6382 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-zb4cv in node crc\\\\nI1129 06:34:41.333939 6382\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\
\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g
rbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.276932 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.294039 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a287baf-0c87-4698-9553-6f94927fbf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69ea28bcbeb8379671147cd41f131b8b37b41a285319b082c43381a56cdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754f770b2635be1fc785e0cc958e0c885dd7516ba54de760493ec7778d738708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:43Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b25cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.305026 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.305095 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.305108 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.305155 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.305167 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:49Z","lastTransitionTime":"2025-11-29T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.309814 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.323684 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-c
ni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.334973 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b3
5462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.352870 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.375015 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.390208 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.405454 4947 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.408746 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.408797 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.408813 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.408837 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.408853 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:49Z","lastTransitionTime":"2025-11-29T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.418009 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.431160 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.444729 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.456326 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fbj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:49 crc 
kubenswrapper[4947]: I1129 06:34:49.517746 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.518137 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.518334 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.518486 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.518612 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:49Z","lastTransitionTime":"2025-11-29T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.622831 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.622907 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.622932 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.622963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.622983 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:49Z","lastTransitionTime":"2025-11-29T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.726843 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.726917 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.726941 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.726970 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.726992 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:49Z","lastTransitionTime":"2025-11-29T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.830189 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.830302 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.830322 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.830346 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.830365 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:49Z","lastTransitionTime":"2025-11-29T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.933628 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.933670 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.933681 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.933699 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:49 crc kubenswrapper[4947]: I1129 06:34:49.933718 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:49Z","lastTransitionTime":"2025-11-29T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.036651 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.036714 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.036733 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.036758 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.036775 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:50Z","lastTransitionTime":"2025-11-29T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.140489 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.140577 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.140604 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.140635 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.140655 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:50Z","lastTransitionTime":"2025-11-29T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.178205 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.178312 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.178346 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:34:50 crc kubenswrapper[4947]: E1129 06:34:50.178484 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:34:50 crc kubenswrapper[4947]: E1129 06:34:50.178651 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:34:50 crc kubenswrapper[4947]: E1129 06:34:50.178821 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.243836 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.243897 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.243909 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.243927 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.243938 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:50Z","lastTransitionTime":"2025-11-29T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.346938 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.346986 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.347003 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.347041 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.347059 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:50Z","lastTransitionTime":"2025-11-29T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.449635 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.449694 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.449715 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.449743 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.449764 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:50Z","lastTransitionTime":"2025-11-29T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.558356 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.558425 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.558451 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.558487 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.558510 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:50Z","lastTransitionTime":"2025-11-29T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.664061 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.664123 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.664142 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.664166 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.664182 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:50Z","lastTransitionTime":"2025-11-29T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.767522 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.767563 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.767573 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.767592 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.767604 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:50Z","lastTransitionTime":"2025-11-29T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.870779 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.870808 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.870817 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.870830 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.870839 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:50Z","lastTransitionTime":"2025-11-29T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.974270 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.974322 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.974339 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.974363 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:50 crc kubenswrapper[4947]: I1129 06:34:50.974380 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:50Z","lastTransitionTime":"2025-11-29T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.077756 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.077801 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.077818 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.077841 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.077858 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:51Z","lastTransitionTime":"2025-11-29T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.178746 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:51 crc kubenswrapper[4947]: E1129 06:34:51.178956 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.181351 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.181401 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.181416 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.181434 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.181446 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:51Z","lastTransitionTime":"2025-11-29T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.284152 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.284202 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.284232 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.284247 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.284259 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:51Z","lastTransitionTime":"2025-11-29T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.386938 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.386983 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.386996 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.387014 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.387025 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:51Z","lastTransitionTime":"2025-11-29T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.489422 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.489463 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.489471 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.489486 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.489495 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:51Z","lastTransitionTime":"2025-11-29T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.592721 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.592781 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.592805 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.592834 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.592855 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:51Z","lastTransitionTime":"2025-11-29T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.696049 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.696098 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.696109 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.696126 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.696136 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:51Z","lastTransitionTime":"2025-11-29T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.800170 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.800274 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.800296 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.800323 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.800343 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:51Z","lastTransitionTime":"2025-11-29T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.903133 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.903209 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.903269 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.903305 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.903328 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:51Z","lastTransitionTime":"2025-11-29T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:51 crc kubenswrapper[4947]: I1129 06:34:51.973689 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs\") pod \"network-metrics-daemon-2fbj5\" (UID: \"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\") " pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:34:51 crc kubenswrapper[4947]: E1129 06:34:51.973996 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 06:34:51 crc kubenswrapper[4947]: E1129 06:34:51.974107 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs podName:53a3bcac-8ad0-47ce-abee-ee56fd152ea8 nodeName:}" failed. No retries permitted until 2025-11-29 06:34:59.974076462 +0000 UTC m=+51.018458583 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs") pod "network-metrics-daemon-2fbj5" (UID: "53a3bcac-8ad0-47ce-abee-ee56fd152ea8") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.005859 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.005920 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.005937 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.005960 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.005977 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:52Z","lastTransitionTime":"2025-11-29T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.109627 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.109677 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.109692 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.109716 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.109730 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:52Z","lastTransitionTime":"2025-11-29T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.178566 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.178620 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:34:52 crc kubenswrapper[4947]: E1129 06:34:52.178761 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.178643 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:34:52 crc kubenswrapper[4947]: E1129 06:34:52.178858 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:34:52 crc kubenswrapper[4947]: E1129 06:34:52.178922 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.212329 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.212406 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.212430 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.212460 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.212483 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:52Z","lastTransitionTime":"2025-11-29T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.315169 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.315250 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.315270 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.315294 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.315310 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:52Z","lastTransitionTime":"2025-11-29T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.418659 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.418757 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.418783 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.418810 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.418826 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:52Z","lastTransitionTime":"2025-11-29T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.522132 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.522261 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.522302 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.522340 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.522363 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:52Z","lastTransitionTime":"2025-11-29T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.625161 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.625205 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.625234 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.625249 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.625257 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:52Z","lastTransitionTime":"2025-11-29T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.727760 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.727799 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.727809 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.727825 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.727836 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:52Z","lastTransitionTime":"2025-11-29T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.731638 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.731699 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.731711 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.731727 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.731761 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:52Z","lastTransitionTime":"2025-11-29T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:52 crc kubenswrapper[4947]: E1129 06:34:52.745969 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:52Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.749900 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.749933 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.749942 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.749955 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.749964 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:52Z","lastTransitionTime":"2025-11-29T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:52 crc kubenswrapper[4947]: E1129 06:34:52.765975 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:52Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.770613 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.770645 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.770656 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.770673 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.770704 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:52Z","lastTransitionTime":"2025-11-29T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:52 crc kubenswrapper[4947]: E1129 06:34:52.786281 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:52Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.789768 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.789799 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.789810 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.789827 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.789838 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:52Z","lastTransitionTime":"2025-11-29T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:52 crc kubenswrapper[4947]: E1129 06:34:52.803032 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:52Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.807068 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.807097 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.807110 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.807126 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.807139 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:52Z","lastTransitionTime":"2025-11-29T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:52 crc kubenswrapper[4947]: E1129 06:34:52.822775 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:52Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:52 crc kubenswrapper[4947]: E1129 06:34:52.822881 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.829198 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.829242 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.829254 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.829269 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.829280 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:52Z","lastTransitionTime":"2025-11-29T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.931344 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.931385 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.931398 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.931415 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:52 crc kubenswrapper[4947]: I1129 06:34:52.931429 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:52Z","lastTransitionTime":"2025-11-29T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.034134 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.034172 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.034181 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.034198 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.034209 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:53Z","lastTransitionTime":"2025-11-29T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.136600 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.136647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.136658 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.136674 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.136685 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:53Z","lastTransitionTime":"2025-11-29T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.178443 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:53 crc kubenswrapper[4947]: E1129 06:34:53.178584 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.238932 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.238972 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.238982 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.238998 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.239011 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:53Z","lastTransitionTime":"2025-11-29T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.341058 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.341097 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.341106 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.341123 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.341132 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:53Z","lastTransitionTime":"2025-11-29T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.443509 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.443577 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.443586 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.443601 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.443610 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:53Z","lastTransitionTime":"2025-11-29T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.546190 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.546346 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.546372 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.546395 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.546410 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:53Z","lastTransitionTime":"2025-11-29T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.649346 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.649384 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.649395 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.649413 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.649424 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:53Z","lastTransitionTime":"2025-11-29T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.752991 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.753086 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.753102 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.753132 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.753148 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:53Z","lastTransitionTime":"2025-11-29T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.856762 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.856818 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.856830 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.856845 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.856858 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:53Z","lastTransitionTime":"2025-11-29T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.960394 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.960471 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.960482 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.960514 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:53 crc kubenswrapper[4947]: I1129 06:34:53.960528 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:53Z","lastTransitionTime":"2025-11-29T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.063304 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.063377 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.063400 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.063429 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.063452 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:54Z","lastTransitionTime":"2025-11-29T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.166061 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.166123 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.166140 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.166167 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.166185 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:54Z","lastTransitionTime":"2025-11-29T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.178130 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.178204 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:34:54 crc kubenswrapper[4947]: E1129 06:34:54.178382 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.178404 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:54 crc kubenswrapper[4947]: E1129 06:34:54.178459 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:34:54 crc kubenswrapper[4947]: E1129 06:34:54.178596 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.268965 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.269016 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.269027 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.269044 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.269055 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:54Z","lastTransitionTime":"2025-11-29T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.372257 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.372529 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.372639 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.372781 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.372869 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:54Z","lastTransitionTime":"2025-11-29T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.476815 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.476871 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.476882 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.476901 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.476913 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:54Z","lastTransitionTime":"2025-11-29T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.583883 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.583990 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.584059 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.584087 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.584104 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:54Z","lastTransitionTime":"2025-11-29T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.688362 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.688421 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.688439 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.688463 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.688479 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:54Z","lastTransitionTime":"2025-11-29T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.791722 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.791798 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.791810 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.791829 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.791840 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:54Z","lastTransitionTime":"2025-11-29T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.894914 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.894971 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.894987 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.895014 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.895030 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:54Z","lastTransitionTime":"2025-11-29T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.997831 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.998108 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.998365 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.998506 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:54 crc kubenswrapper[4947]: I1129 06:34:54.998626 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:54Z","lastTransitionTime":"2025-11-29T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.101667 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.101723 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.101740 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.101764 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.101781 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:55Z","lastTransitionTime":"2025-11-29T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.177954 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:55 crc kubenswrapper[4947]: E1129 06:34:55.178559 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.203214 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.204067 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.204129 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.204151 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.204179 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.204202 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:55Z","lastTransitionTime":"2025-11-29T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.204509 4947 scope.go:117] "RemoveContainer" containerID="d8310aad81d0ce5b52b2c8c5d096733fcabcd9f0d0c9cbae9cf7be55a2ca3e0b" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.225875 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operat
or@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:55Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.247059 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:55Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.261096 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fbj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:55Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:55 crc 
kubenswrapper[4947]: I1129 06:34:55.277514 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da60
1900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 
06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:55Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.291537 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:55Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.303638 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:55Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.306366 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.306390 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.306400 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.306414 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.306424 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:55Z","lastTransitionTime":"2025-11-29T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.324938 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8310aad81d0ce5b52b2c8c5d096733fcabcd9f0d0c9cbae9cf7be55a2ca3e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8310aad81d0ce5b52b2c8c5d096733fcabcd9f0d0c9cbae9cf7be55a2ca3e0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"message\\\":\\\"303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1129 06:34:41.333749 6382 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1129 06:34:41.333772 6382 ovn.go:134] Ensuring zone local for Pod 
openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1129 06:34:41.333785 6382 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1129 06:34:41.333798 6382 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1129 06:34:41.333636 6382 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI1129 06:34:41.333864 6382 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1129 06:34:41.333627 6382 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-zb4cv\\\\nI1129 06:34:41.333932 6382 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-zb4cv in node crc\\\\nI1129 06:34:41.333939 6382\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z4rxq_openshift-ovn-kubernetes(dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf
2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:55Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.337457 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:55Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.349435 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a287baf-0c87-4698-9553-6f94927fbf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69ea28bcbeb8379671147cd41f131b8b37b41a285319b082c43381a56cdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754f770b2635be1fc785e0cc958e0c885dd7516ba54de760493ec7778d738708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b25cq\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:55Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.362603 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:55Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.379077 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13
fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5b
jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0
bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:55Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.389245 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b3
5462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:55Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.400167 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:55Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.410103 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:55 crc 
kubenswrapper[4947]: I1129 06:34:55.410143 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.410151 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.410169 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.410180 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:55Z","lastTransitionTime":"2025-11-29T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.420178 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:55Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.431535 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:55Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.442005 4947 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:55Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.451313 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:55Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.511908 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.512021 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.512039 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.512057 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.512069 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:55Z","lastTransitionTime":"2025-11-29T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.614283 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.614315 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.614327 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.614342 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.614353 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:55Z","lastTransitionTime":"2025-11-29T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.717677 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.717758 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.717772 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.717789 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.717801 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:55Z","lastTransitionTime":"2025-11-29T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.820847 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.820905 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.820922 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.820959 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.820980 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:55Z","lastTransitionTime":"2025-11-29T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.985553 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.985605 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.985613 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.985625 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:55 crc kubenswrapper[4947]: I1129 06:34:55.985635 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:55Z","lastTransitionTime":"2025-11-29T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.088753 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.088792 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.088801 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.088816 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.088824 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:56Z","lastTransitionTime":"2025-11-29T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.178690 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.178693 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:34:56 crc kubenswrapper[4947]: E1129 06:34:56.178874 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:34:56 crc kubenswrapper[4947]: E1129 06:34:56.178996 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.178710 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:56 crc kubenswrapper[4947]: E1129 06:34:56.179185 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.191528 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.191577 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.191593 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.191614 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.191630 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:56Z","lastTransitionTime":"2025-11-29T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.294043 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.294116 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.294136 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.294161 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.294178 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:56Z","lastTransitionTime":"2025-11-29T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.396471 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.396520 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.396536 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.396557 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.396573 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:56Z","lastTransitionTime":"2025-11-29T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.487482 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z4rxq_dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/ovnkube-controller/1.log" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.491997 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerStarted","Data":"ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442"} Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.492500 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.501919 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.501963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.501975 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.501994 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.502008 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:56Z","lastTransitionTime":"2025-11-29T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.507936 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:56Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.521078 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:56Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.532891 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a287baf-0c87-4698-9553-6f94927fbf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69ea28bcbeb8379671147cd41f131b8b37b41a285319b082c43381a56cdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754f770b2635be1fc785e0cc958e0c885dd7516ba54de760493ec7778d738708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b25cq\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:56Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.545122 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:56Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.561956 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13
fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5b
jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0
bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:56Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.575514 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:56Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.588552 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:56Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.604038 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.604073 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.604086 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.604104 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.604117 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:56Z","lastTransitionTime":"2025-11-29T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.609057 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:56Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.626131 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:56Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.642372 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:56Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.657802 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fbj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:56Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:56 crc 
kubenswrapper[4947]: I1129 06:34:56.673432 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:56Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.690051 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:56Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.707724 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 
06:34:56.707792 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.707815 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.707849 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.707871 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:56Z","lastTransitionTime":"2025-11-29T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.720529 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8310aad81d0ce5b52b2c8c5d096733fcabcd9f0d0c9cbae9cf7be55a2ca3e0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"message\\\":\\\"303] Retry object setup: *v1.Pod 
openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1129 06:34:41.333749 6382 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1129 06:34:41.333772 6382 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1129 06:34:41.333785 6382 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1129 06:34:41.333798 6382 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1129 06:34:41.333636 6382 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI1129 06:34:41.333864 6382 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1129 06:34:41.333627 6382 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-zb4cv\\\\nI1129 06:34:41.333932 6382 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-zb4cv in node crc\\\\nI1129 06:34:41.333939 
6382\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\
\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:56Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.738154 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1
fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:56Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.755702 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:56Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.771083 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:56Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.810647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.810710 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.810721 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.810738 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.810751 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:56Z","lastTransitionTime":"2025-11-29T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.913601 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.913649 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.913660 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.913679 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:56 crc kubenswrapper[4947]: I1129 06:34:56.913691 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:56Z","lastTransitionTime":"2025-11-29T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.016825 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.016885 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.016902 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.016927 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.016953 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:57Z","lastTransitionTime":"2025-11-29T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.120788 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.120825 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.120836 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.120853 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.120865 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:57Z","lastTransitionTime":"2025-11-29T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.178667 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:57 crc kubenswrapper[4947]: E1129 06:34:57.178849 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.224422 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.224502 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.224539 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.224576 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.224629 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:57Z","lastTransitionTime":"2025-11-29T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.328057 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.328205 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.328270 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.328304 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.328329 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:57Z","lastTransitionTime":"2025-11-29T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.386731 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.397630 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.404326 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.431440 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.431473 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.431485 4947 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.431503 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.431515 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:57Z","lastTransitionTime":"2025-11-29T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.436249 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.451897 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fbj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc 
kubenswrapper[4947]: I1129 06:34:57.466294 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da60
1900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 
06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.479082 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.495350 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.496823 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z4rxq_dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/ovnkube-controller/2.log" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.497531 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z4rxq_dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/ovnkube-controller/1.log" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.500171 4947 generic.go:334] "Generic (PLEG): container finished" podID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerID="ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442" exitCode=1 Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.500871 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerDied","Data":"ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442"} Nov 29 06:34:57 crc 
kubenswrapper[4947]: I1129 06:34:57.500933 4947 scope.go:117] "RemoveContainer" containerID="d8310aad81d0ce5b52b2c8c5d096733fcabcd9f0d0c9cbae9cf7be55a2ca3e0b" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.501136 4947 scope.go:117] "RemoveContainer" containerID="ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442" Nov 29 06:34:57 crc kubenswrapper[4947]: E1129 06:34:57.501293 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-z4rxq_openshift-ovn-kubernetes(dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.522731 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8310aad81d0ce5b52b2c8c5d096733fcabcd9f0d0c9cbae9cf7be55a2ca3e0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"message\\\":\\\"303] Retry object setup: *v1.Pod 
openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1129 06:34:41.333749 6382 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1129 06:34:41.333772 6382 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1129 06:34:41.333785 6382 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1129 06:34:41.333798 6382 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1129 06:34:41.333636 6382 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI1129 06:34:41.333864 6382 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1129 06:34:41.333627 6382 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-zb4cv\\\\nI1129 06:34:41.333932 6382 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-zb4cv in node crc\\\\nI1129 06:34:41.333939 
6382\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\
\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.538569 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.538618 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.538630 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.538647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.538660 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:57Z","lastTransitionTime":"2025-11-29T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.539288 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.554025 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a287baf-0c87-4698-9553-6f94927fbf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69ea28bcbeb8379671147cd41f131b8b37b41a285319b082c43381a56cdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754f770b2635be1fc785e0cc958e0c885dd75
16ba54de760493ec7778d738708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b25cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.570309 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.594934 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.607658 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.625082 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.641316 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.641361 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.641373 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.641399 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.641411 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:57Z","lastTransitionTime":"2025-11-29T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.647252 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.660524 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.675257 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.687550 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.701101 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.714372 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.733335 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8310aad81d0ce5b52b2c8c5d096733fcabcd9f0d0c9cbae9cf7be55a2ca3e0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"message\\\":\\\"303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1129 06:34:41.333749 6382 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1129 06:34:41.333772 6382 ovn.go:134] Ensuring zone local for Pod 
openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1129 06:34:41.333785 6382 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1129 06:34:41.333798 6382 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1129 06:34:41.333636 6382 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI1129 06:34:41.333864 6382 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1129 06:34:41.333627 6382 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-zb4cv\\\\nI1129 06:34:41.333932 6382 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-zb4cv in node crc\\\\nI1129 06:34:41.333939 6382\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:56Z\\\",\\\"message\\\":\\\"tor *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:56.438063 6572 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 06:34:56.438071 6572 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 06:34:56.438097 6572 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 06:34:56.438057 6572 factory.go:656] Stopping watch factory\\\\nI1129 06:34:56.438088 6572 
handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 06:34:56.438055 6572 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 06:34:56.438275 6572 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:56.438450 6572 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 06:34:56.438512 6572 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 06:34:56.438613 6572 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.743227 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.743257 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.743266 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.743282 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.743291 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:57Z","lastTransitionTime":"2025-11-29T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.749680 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.761507 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.776710 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.789178 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.798581 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34
:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.811868 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a287baf-0c87-4698-9553-6f94927fbf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69ea28bcbeb8379671147cd41f131b8b37b41a285319b082c43381a56cdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754f770b2635be1fc785e0cc958e0c885dd75
16ba54de760493ec7778d738708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b25cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.826801 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.840742 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.845886 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.845935 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.845952 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.845976 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.845992 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:57Z","lastTransitionTime":"2025-11-29T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.852815 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.871340 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.891096 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\
\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-
o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.906999 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"682a3ca0-7f80-4aa3-8627-44c5f9d6c661\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0decb70631f9a60b5a44909b2cd152c099aa6955393b715617a93d2639a8f211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7347898f9e11318a33ea5f24ef489a4e58da64e0631ac46aa91f30f5691ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836bfd5239874f47639673b177b0d441dff3d84e255c7c6d1983c9e0db5134fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.924190 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.937488 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fbj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:57 crc 
kubenswrapper[4947]: I1129 06:34:57.948517 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.948563 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.948575 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.948589 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.948599 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:57Z","lastTransitionTime":"2025-11-29T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:57 crc kubenswrapper[4947]: I1129 06:34:57.950396 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f40
5f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:57Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.050650 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.050692 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.050700 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.050713 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.050721 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:58Z","lastTransitionTime":"2025-11-29T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.154006 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.154058 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.154074 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.154100 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.154123 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:58Z","lastTransitionTime":"2025-11-29T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.177786 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.177790 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.177889 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:34:58 crc kubenswrapper[4947]: E1129 06:34:58.178049 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:34:58 crc kubenswrapper[4947]: E1129 06:34:58.178171 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:34:58 crc kubenswrapper[4947]: E1129 06:34:58.178332 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.256452 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.256518 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.256536 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.256562 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.256581 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:58Z","lastTransitionTime":"2025-11-29T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.359153 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.359191 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.359201 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.359302 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.359324 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:58Z","lastTransitionTime":"2025-11-29T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.462673 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.462749 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.462767 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.462792 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.462831 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:58Z","lastTransitionTime":"2025-11-29T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.507638 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z4rxq_dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/ovnkube-controller/2.log" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.517445 4947 scope.go:117] "RemoveContainer" containerID="ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442" Nov 29 06:34:58 crc kubenswrapper[4947]: E1129 06:34:58.517736 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-z4rxq_openshift-ovn-kubernetes(dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.539670 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:58Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.561184 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"682a3ca0-7f80-4aa3-8627-44c5f9d6c661\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0decb70631f9a60b5a44909b2cd152c099aa6955393b715617a93d2639a8f211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7347898f9e11318a33ea5f24ef489a4e58da64e0631ac46aa91f30f5691ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836bfd5239874f47639673b177b0d441dff3d84e255c7c6d1983c9e0db5134fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:58Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.565828 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.565892 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.565912 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.565937 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.565953 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:58Z","lastTransitionTime":"2025-11-29T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.581682 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:58Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.596792 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fbj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:58Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:58 crc 
kubenswrapper[4947]: I1129 06:34:58.612137 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da60
1900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 
06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:58Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.628112 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:58Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.641041 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:58Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.658578 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:56Z\\\",\\\"message\\\":\\\"tor *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:56.438063 6572 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 06:34:56.438071 6572 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 06:34:56.438097 6572 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 06:34:56.438057 6572 factory.go:656] Stopping watch factory\\\\nI1129 06:34:56.438088 6572 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 06:34:56.438055 6572 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 06:34:56.438275 6572 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:56.438450 6572 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 06:34:56.438512 6572 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 06:34:56.438613 6572 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z4rxq_openshift-ovn-kubernetes(dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf
2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:58Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.668576 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.668618 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.668651 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.668668 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.668680 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:58Z","lastTransitionTime":"2025-11-29T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.668687 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:58Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.678651 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a287baf-0c87-4698-9553-6f94927fbf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69ea28bcbeb8379671147cd41f131b8b37b41a285319b082c43381a56cdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754f770b2635be1fc785e0cc958e0c885dd75
16ba54de760493ec7778d738708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b25cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:58Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.690123 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T06:34:58Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.703446 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:58Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.715038 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:58Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.729805 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:58Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.747614 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473
c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:58Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.760909 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:58Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.771180 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.771240 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.771255 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.771273 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.771284 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:58Z","lastTransitionTime":"2025-11-29T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.774839 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:58Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.787325 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:58Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.875059 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.875095 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.875103 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.875120 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.875132 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:58Z","lastTransitionTime":"2025-11-29T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.977709 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.977765 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.977779 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.977796 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:58 crc kubenswrapper[4947]: I1129 06:34:58.977807 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:58Z","lastTransitionTime":"2025-11-29T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.080422 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.080470 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.080482 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.080502 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.080516 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:59Z","lastTransitionTime":"2025-11-29T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.178797 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:34:59 crc kubenswrapper[4947]: E1129 06:34:59.179028 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.185581 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.185656 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.185667 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.185688 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.185700 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:59Z","lastTransitionTime":"2025-11-29T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.196386 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:59Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.213015 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a287baf-0c87-4698-9553-6f94927fbf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69ea28
bcbeb8379671147cd41f131b8b37b41a285319b082c43381a56cdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754f770b2635be1fc785e0cc958e0c885dd7516ba54de760493ec7778d738708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b25cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:59Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.231612 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-
29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:59Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.247141 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c68
12845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:59Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.259625 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:59Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:59 crc kubenswrapper[4947]: 
I1129 06:34:59.280942 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus
/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:59Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 
06:34:59.287113 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.287152 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.287160 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.287174 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.287183 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:59Z","lastTransitionTime":"2025-11-29T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.306385 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:59Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.318009 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:59Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.331451 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:59Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.343350 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:59Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.358922 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:59Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.371148 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"682a3ca0-7f80-4aa3-8627-44c5f9d6c661\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0decb70631f9a60b5a44909b2cd152c099aa6955393b715617a93d2639a8f211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7347898f9e11318a33ea5f24ef489a4e58da64e0631ac46aa91f30f5691ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836bfd5239874f47639673b177b0d441dff3d84e255c7c6d1983c9e0db5134fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:59Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.384138 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:59Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.389370 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.389415 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.389426 4947 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.389444 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.389457 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:59Z","lastTransitionTime":"2025-11-29T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.396638 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fbj5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:59Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.411393 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:59Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.426258 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:59Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.438698 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:59Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.459154 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:56Z\\\",\\\"message\\\":\\\"tor *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:56.438063 6572 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 06:34:56.438071 6572 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 06:34:56.438097 6572 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 06:34:56.438057 6572 factory.go:656] Stopping watch factory\\\\nI1129 06:34:56.438088 6572 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 06:34:56.438055 6572 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 06:34:56.438275 6572 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:56.438450 6572 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 06:34:56.438512 6572 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 06:34:56.438613 6572 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z4rxq_openshift-ovn-kubernetes(dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf
2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:34:59Z is after 2025-08-24T17:21:41Z" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.492184 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.492262 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.492276 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.492296 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.492311 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:59Z","lastTransitionTime":"2025-11-29T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.595263 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.595315 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.595327 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.595350 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.595363 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:59Z","lastTransitionTime":"2025-11-29T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.697777 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.697829 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.697845 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.697865 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.697877 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:59Z","lastTransitionTime":"2025-11-29T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.800526 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.800585 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.800603 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.800632 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.800650 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:59Z","lastTransitionTime":"2025-11-29T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.903054 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.903095 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.903107 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.903125 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:34:59 crc kubenswrapper[4947]: I1129 06:34:59.903136 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:34:59Z","lastTransitionTime":"2025-11-29T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.005237 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.005278 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.005290 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.005307 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.005318 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:00Z","lastTransitionTime":"2025-11-29T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.020097 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs\") pod \"network-metrics-daemon-2fbj5\" (UID: \"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\") " pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:00 crc kubenswrapper[4947]: E1129 06:35:00.020278 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 06:35:00 crc kubenswrapper[4947]: E1129 06:35:00.020374 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs podName:53a3bcac-8ad0-47ce-abee-ee56fd152ea8 nodeName:}" failed. No retries permitted until 2025-11-29 06:35:16.020351374 +0000 UTC m=+67.064733525 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs") pod "network-metrics-daemon-2fbj5" (UID: "53a3bcac-8ad0-47ce-abee-ee56fd152ea8") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.107808 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.107857 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.107870 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.107888 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.107899 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:00Z","lastTransitionTime":"2025-11-29T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.178634 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.178753 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:00 crc kubenswrapper[4947]: E1129 06:35:00.178833 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:00 crc kubenswrapper[4947]: E1129 06:35:00.178957 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.179031 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:00 crc kubenswrapper[4947]: E1129 06:35:00.179139 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.210274 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.210307 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.210321 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.210339 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.210351 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:00Z","lastTransitionTime":"2025-11-29T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.221958 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.222120 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.222278 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:00 crc kubenswrapper[4947]: E1129 06:35:00.222442 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 06:35:00 crc kubenswrapper[4947]: E1129 06:35:00.222522 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:35:32.222485228 +0000 UTC m=+83.266867379 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:35:00 crc kubenswrapper[4947]: E1129 06:35:00.222584 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 06:35:32.22257221 +0000 UTC m=+83.266954431 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 06:35:00 crc kubenswrapper[4947]: E1129 06:35:00.222526 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 06:35:00 crc kubenswrapper[4947]: E1129 06:35:00.222831 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 06:35:32.222823226 +0000 UTC m=+83.267205307 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.312966 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.313015 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.313026 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.313039 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.313050 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:00Z","lastTransitionTime":"2025-11-29T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.418347 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.419252 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.419346 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.419397 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.419428 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:00Z","lastTransitionTime":"2025-11-29T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.423859 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.423889 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:00 crc kubenswrapper[4947]: E1129 06:35:00.423977 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 06:35:00 crc kubenswrapper[4947]: E1129 06:35:00.423991 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 06:35:00 crc kubenswrapper[4947]: E1129 06:35:00.424001 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:35:00 crc kubenswrapper[4947]: E1129 06:35:00.423999 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 
06:35:00 crc kubenswrapper[4947]: E1129 06:35:00.424035 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 06:35:00 crc kubenswrapper[4947]: E1129 06:35:00.424043 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 06:35:32.424031247 +0000 UTC m=+83.468413328 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:35:00 crc kubenswrapper[4947]: E1129 06:35:00.424053 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:35:00 crc kubenswrapper[4947]: E1129 06:35:00.424105 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 06:35:32.424087108 +0000 UTC m=+83.468469219 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.521710 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.521751 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.521763 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.521780 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.521792 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:00Z","lastTransitionTime":"2025-11-29T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.624861 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.624913 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.624928 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.624950 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.624966 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:00Z","lastTransitionTime":"2025-11-29T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.727420 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.727472 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.727485 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.727502 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.727512 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:00Z","lastTransitionTime":"2025-11-29T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.830713 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.830756 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.830765 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.830783 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.830796 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:00Z","lastTransitionTime":"2025-11-29T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.934313 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.934378 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.934400 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.934424 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:00 crc kubenswrapper[4947]: I1129 06:35:00.934438 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:00Z","lastTransitionTime":"2025-11-29T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.037859 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.037930 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.037951 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.037971 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.037983 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:01Z","lastTransitionTime":"2025-11-29T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.140570 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.140610 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.140621 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.140638 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.140650 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:01Z","lastTransitionTime":"2025-11-29T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.178316 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:01 crc kubenswrapper[4947]: E1129 06:35:01.178606 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.243735 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.243803 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.243813 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.243835 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.243846 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:01Z","lastTransitionTime":"2025-11-29T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.347652 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.347753 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.347775 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.347803 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.347825 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:01Z","lastTransitionTime":"2025-11-29T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.450434 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.450476 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.450489 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.450516 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.450528 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:01Z","lastTransitionTime":"2025-11-29T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.553659 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.553715 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.553737 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.553762 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.553778 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:01Z","lastTransitionTime":"2025-11-29T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.657379 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.657472 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.657495 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.657530 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.657551 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:01Z","lastTransitionTime":"2025-11-29T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.760271 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.760331 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.760342 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.760361 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.760375 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:01Z","lastTransitionTime":"2025-11-29T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.863765 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.863844 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.863868 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.863899 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.863922 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:01Z","lastTransitionTime":"2025-11-29T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.966933 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.966985 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.967001 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.967024 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:01 crc kubenswrapper[4947]: I1129 06:35:01.967041 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:01Z","lastTransitionTime":"2025-11-29T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.069048 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.069108 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.069124 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.069148 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.069164 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:02Z","lastTransitionTime":"2025-11-29T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.172075 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.172155 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.172177 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.172210 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.172293 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:02Z","lastTransitionTime":"2025-11-29T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.178730 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.178831 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.178736 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:02 crc kubenswrapper[4947]: E1129 06:35:02.178930 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:02 crc kubenswrapper[4947]: E1129 06:35:02.179085 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:02 crc kubenswrapper[4947]: E1129 06:35:02.179307 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.275431 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.275497 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.275518 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.275545 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.275566 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:02Z","lastTransitionTime":"2025-11-29T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.378901 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.378964 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.378982 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.379008 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.379026 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:02Z","lastTransitionTime":"2025-11-29T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.481613 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.481659 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.481668 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.481681 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.481690 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:02Z","lastTransitionTime":"2025-11-29T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.583896 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.583966 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.583992 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.584021 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.584043 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:02Z","lastTransitionTime":"2025-11-29T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.687113 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.687186 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.687208 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.687312 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.687333 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:02Z","lastTransitionTime":"2025-11-29T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.791269 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.791358 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.791376 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.791402 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.791419 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:02Z","lastTransitionTime":"2025-11-29T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.894506 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.894593 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.894616 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.894647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.894669 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:02Z","lastTransitionTime":"2025-11-29T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.936319 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.936444 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.936460 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.936487 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.936611 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:02Z","lastTransitionTime":"2025-11-29T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:02 crc kubenswrapper[4947]: E1129 06:35:02.962171 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:02Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.967094 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.967127 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.967137 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.967153 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.967162 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:02Z","lastTransitionTime":"2025-11-29T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:02 crc kubenswrapper[4947]: E1129 06:35:02.988095 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:02Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.993027 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.993179 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.993335 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.993463 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:02 crc kubenswrapper[4947]: I1129 06:35:02.993546 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:02Z","lastTransitionTime":"2025-11-29T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:03 crc kubenswrapper[4947]: E1129 06:35:03.012510 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:03Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.017073 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.017125 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.017138 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.017157 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.017171 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:03Z","lastTransitionTime":"2025-11-29T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:03 crc kubenswrapper[4947]: E1129 06:35:03.037312 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:03Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.042438 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.042617 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.042699 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.042773 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.042837 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:03Z","lastTransitionTime":"2025-11-29T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:03 crc kubenswrapper[4947]: E1129 06:35:03.061946 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:03Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:03 crc kubenswrapper[4947]: E1129 06:35:03.062112 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.064715 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.064757 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.064769 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.064787 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.064799 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:03Z","lastTransitionTime":"2025-11-29T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.168022 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.168094 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.168118 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.168150 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.168175 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:03Z","lastTransitionTime":"2025-11-29T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.178440 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:03 crc kubenswrapper[4947]: E1129 06:35:03.178630 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.271538 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.271935 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.272069 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.272216 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.272402 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:03Z","lastTransitionTime":"2025-11-29T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.375846 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.375915 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.375928 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.375949 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.375962 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:03Z","lastTransitionTime":"2025-11-29T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.479392 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.479458 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.479476 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.479500 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.479518 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:03Z","lastTransitionTime":"2025-11-29T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.582206 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.582294 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.582308 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.582325 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.582336 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:03Z","lastTransitionTime":"2025-11-29T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.685544 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.685605 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.685623 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.685645 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.685661 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:03Z","lastTransitionTime":"2025-11-29T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.788151 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.788193 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.788204 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.788222 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.788234 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:03Z","lastTransitionTime":"2025-11-29T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.890612 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.890693 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.890708 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.890729 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.890742 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:03Z","lastTransitionTime":"2025-11-29T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.994373 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.994519 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.994541 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.994572 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:03 crc kubenswrapper[4947]: I1129 06:35:03.994591 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:03Z","lastTransitionTime":"2025-11-29T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.098489 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.098561 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.098571 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.098588 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.098599 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:04Z","lastTransitionTime":"2025-11-29T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.178428 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.178474 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.178555 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:04 crc kubenswrapper[4947]: E1129 06:35:04.178670 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:04 crc kubenswrapper[4947]: E1129 06:35:04.178811 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:04 crc kubenswrapper[4947]: E1129 06:35:04.178941 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.201230 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.201298 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.201315 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.201336 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.201351 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:04Z","lastTransitionTime":"2025-11-29T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.303790 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.303840 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.303849 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.303866 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.303878 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:04Z","lastTransitionTime":"2025-11-29T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.406258 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.406311 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.406326 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.406346 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.406372 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:04Z","lastTransitionTime":"2025-11-29T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.508489 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.508517 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.508525 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.508538 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.508547 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:04Z","lastTransitionTime":"2025-11-29T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.612475 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.612532 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.612549 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.612573 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.612592 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:04Z","lastTransitionTime":"2025-11-29T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.715147 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.715194 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.715209 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.715277 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.715292 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:04Z","lastTransitionTime":"2025-11-29T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.817971 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.818020 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.818033 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.818053 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.818068 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:04Z","lastTransitionTime":"2025-11-29T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.920665 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.920722 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.920738 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.920759 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:04 crc kubenswrapper[4947]: I1129 06:35:04.920774 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:04Z","lastTransitionTime":"2025-11-29T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.024591 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.024646 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.024658 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.024675 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.024686 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:05Z","lastTransitionTime":"2025-11-29T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.128602 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.128657 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.128670 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.128689 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.128702 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:05Z","lastTransitionTime":"2025-11-29T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.178933 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:05 crc kubenswrapper[4947]: E1129 06:35:05.179121 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.231440 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.231556 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.231582 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.231611 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.231636 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:05Z","lastTransitionTime":"2025-11-29T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.335187 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.335242 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.335254 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.335270 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.335282 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:05Z","lastTransitionTime":"2025-11-29T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.438387 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.438460 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.438472 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.438488 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.438499 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:05Z","lastTransitionTime":"2025-11-29T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.541094 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.541125 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.541134 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.541147 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.541156 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:05Z","lastTransitionTime":"2025-11-29T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.643727 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.643780 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.643793 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.643813 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.643826 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:05Z","lastTransitionTime":"2025-11-29T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.746160 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.746201 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.746211 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.746247 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.746256 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:05Z","lastTransitionTime":"2025-11-29T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.849316 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.849379 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.849388 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.849402 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.849410 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:05Z","lastTransitionTime":"2025-11-29T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.952392 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.952456 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.952474 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.952500 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:05 crc kubenswrapper[4947]: I1129 06:35:05.952518 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:05Z","lastTransitionTime":"2025-11-29T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.056190 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.056668 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.056724 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.056750 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.056845 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:06Z","lastTransitionTime":"2025-11-29T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.160668 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.160722 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.160733 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.160751 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.160762 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:06Z","lastTransitionTime":"2025-11-29T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.178327 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.178343 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:06 crc kubenswrapper[4947]: E1129 06:35:06.178537 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.178352 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:06 crc kubenswrapper[4947]: E1129 06:35:06.178663 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:06 crc kubenswrapper[4947]: E1129 06:35:06.178791 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.264091 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.264130 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.264139 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.264152 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.264161 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:06Z","lastTransitionTime":"2025-11-29T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.366929 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.366987 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.367003 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.367027 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.367044 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:06Z","lastTransitionTime":"2025-11-29T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.470947 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.471007 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.471026 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.471050 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.471067 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:06Z","lastTransitionTime":"2025-11-29T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.574272 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.574352 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.574377 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.574406 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.574428 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:06Z","lastTransitionTime":"2025-11-29T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.677272 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.677326 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.677339 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.677358 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.677371 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:06Z","lastTransitionTime":"2025-11-29T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.780613 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.780700 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.780723 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.780754 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.780776 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:06Z","lastTransitionTime":"2025-11-29T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.883412 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.883718 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.883751 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.883782 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.883806 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:06Z","lastTransitionTime":"2025-11-29T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.986838 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.986891 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.986906 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.986928 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:06 crc kubenswrapper[4947]: I1129 06:35:06.986943 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:06Z","lastTransitionTime":"2025-11-29T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.092446 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.092702 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.092714 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.092920 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.092931 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:07Z","lastTransitionTime":"2025-11-29T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.178410 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:07 crc kubenswrapper[4947]: E1129 06:35:07.178612 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.195866 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.195895 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.195903 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.195914 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.195922 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:07Z","lastTransitionTime":"2025-11-29T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.298389 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.298440 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.298456 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.298478 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.298533 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:07Z","lastTransitionTime":"2025-11-29T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.402345 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.402415 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.402435 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.402463 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.402481 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:07Z","lastTransitionTime":"2025-11-29T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.555129 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.555194 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.555206 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.555239 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.555250 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:07Z","lastTransitionTime":"2025-11-29T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.657816 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.657856 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.657865 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.657880 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.657889 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:07Z","lastTransitionTime":"2025-11-29T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.760427 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.760505 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.760516 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.760529 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.760538 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:07Z","lastTransitionTime":"2025-11-29T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.863707 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.863752 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.863763 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.863779 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.863790 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:07Z","lastTransitionTime":"2025-11-29T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.966952 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.966988 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.966996 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.967010 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:07 crc kubenswrapper[4947]: I1129 06:35:07.967018 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:07Z","lastTransitionTime":"2025-11-29T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.070138 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.070272 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.070306 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.070425 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.070456 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:08Z","lastTransitionTime":"2025-11-29T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.174097 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.174172 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.174191 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.174218 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.174295 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:08Z","lastTransitionTime":"2025-11-29T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.178615 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.178671 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.178641 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:08 crc kubenswrapper[4947]: E1129 06:35:08.178822 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:08 crc kubenswrapper[4947]: E1129 06:35:08.178995 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:08 crc kubenswrapper[4947]: E1129 06:35:08.179074 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.277982 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.278057 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.278081 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.278111 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.278137 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:08Z","lastTransitionTime":"2025-11-29T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.381141 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.381201 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.381251 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.381280 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.381297 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:08Z","lastTransitionTime":"2025-11-29T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.483784 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.483849 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.483863 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.483883 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.483896 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:08Z","lastTransitionTime":"2025-11-29T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.585966 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.586036 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.586050 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.586067 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.586461 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:08Z","lastTransitionTime":"2025-11-29T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.689564 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.689624 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.689636 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.689654 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.689669 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:08Z","lastTransitionTime":"2025-11-29T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.793116 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.793177 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.793189 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.793207 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.793237 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:08Z","lastTransitionTime":"2025-11-29T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.899194 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.899284 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.899301 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.899326 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:08 crc kubenswrapper[4947]: I1129 06:35:08.899342 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:08Z","lastTransitionTime":"2025-11-29T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.001174 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.001236 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.001259 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.001274 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.001286 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:09Z","lastTransitionTime":"2025-11-29T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.104301 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.104336 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.104348 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.104365 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.104376 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:09Z","lastTransitionTime":"2025-11-29T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.178106 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:09 crc kubenswrapper[4947]: E1129 06:35:09.178276 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.192709 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release
\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:09Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.206375 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.206432 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.206446 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.206471 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.206486 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:09Z","lastTransitionTime":"2025-11-29T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.212490 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:09Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.224421 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:09Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.244534 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:09Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.256799 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:09Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.269828 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:09Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.281909 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"682a3ca0-7f80-4aa3-8627-44c5f9d6c661\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0decb70631f9a60b5a44909b2cd152c099aa6955393b715617a93d2639a8f211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7347898f9e11318a33ea5f24ef489a4e58da64e0631ac46aa91f30f5691ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836bfd5239874f47639673b177b0d441dff3d84e255c7c6d1983c9e0db5134fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:09Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.294369 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:09Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.304618 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fbj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:09Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:09 crc 
kubenswrapper[4947]: I1129 06:35:09.309155 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.309190 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.309201 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.309225 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.309260 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:09Z","lastTransitionTime":"2025-11-29T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.321188 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:09Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.333353 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:09Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.345424 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:09Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.365632 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:56Z\\\",\\\"message\\\":\\\"tor *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:56.438063 6572 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 06:34:56.438071 6572 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 06:34:56.438097 6572 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 06:34:56.438057 6572 factory.go:656] Stopping watch factory\\\\nI1129 06:34:56.438088 6572 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 06:34:56.438055 6572 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 06:34:56.438275 6572 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:56.438450 6572 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 06:34:56.438512 6572 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 06:34:56.438613 6572 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z4rxq_openshift-ovn-kubernetes(dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf
2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:09Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.378732 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:09Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.391778 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a287baf-0c87-4698-9553-6f94927fbf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69ea28bcbeb8379671147cd41f131b8b37b41a285319b082c43381a56cdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754f770b2635be1fc785e0cc958e0c885dd7516ba54de760493ec7778d738708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b25cq\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:09Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.406567 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:09Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.412035 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.412073 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.412082 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.412137 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.412148 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:09Z","lastTransitionTime":"2025-11-29T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.425910 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:09Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.439553 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:09Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.515915 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.515954 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.515965 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.515988 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.515999 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:09Z","lastTransitionTime":"2025-11-29T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.619920 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.619979 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.619996 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.620021 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.620041 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:09Z","lastTransitionTime":"2025-11-29T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.724419 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.724513 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.724536 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.724566 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.724591 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:09Z","lastTransitionTime":"2025-11-29T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.827653 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.827723 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.827742 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.827783 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.827803 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:09Z","lastTransitionTime":"2025-11-29T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.931777 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.931822 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.931838 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.931859 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:09 crc kubenswrapper[4947]: I1129 06:35:09.931875 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:09Z","lastTransitionTime":"2025-11-29T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.034535 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.034569 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.034578 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.034592 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.034602 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:10Z","lastTransitionTime":"2025-11-29T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.138835 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.138887 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.138903 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.138926 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.138943 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:10Z","lastTransitionTime":"2025-11-29T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.178571 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:10 crc kubenswrapper[4947]: E1129 06:35:10.178706 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.178880 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:10 crc kubenswrapper[4947]: E1129 06:35:10.178936 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.180094 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:10 crc kubenswrapper[4947]: E1129 06:35:10.180180 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.241969 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.242047 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.242065 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.242092 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.242109 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:10Z","lastTransitionTime":"2025-11-29T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.344672 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.344707 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.344716 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.344730 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.344741 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:10Z","lastTransitionTime":"2025-11-29T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.447840 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.447881 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.447892 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.447911 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.447923 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:10Z","lastTransitionTime":"2025-11-29T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.550928 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.550968 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.550979 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.550995 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.551005 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:10Z","lastTransitionTime":"2025-11-29T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.654157 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.654206 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.654235 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.654251 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.654261 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:10Z","lastTransitionTime":"2025-11-29T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.758438 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.758527 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.758540 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.758559 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.758571 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:10Z","lastTransitionTime":"2025-11-29T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.862324 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.862411 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.862436 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.862463 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.862499 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:10Z","lastTransitionTime":"2025-11-29T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.965769 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.965865 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.965884 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.965947 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:10 crc kubenswrapper[4947]: I1129 06:35:10.965966 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:10Z","lastTransitionTime":"2025-11-29T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.069654 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.069720 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.069742 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.069769 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.069788 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:11Z","lastTransitionTime":"2025-11-29T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.172328 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.172367 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.172375 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.172390 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.172400 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:11Z","lastTransitionTime":"2025-11-29T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.177774 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:11 crc kubenswrapper[4947]: E1129 06:35:11.177866 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.179153 4947 scope.go:117] "RemoveContainer" containerID="ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442" Nov 29 06:35:11 crc kubenswrapper[4947]: E1129 06:35:11.179686 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-z4rxq_openshift-ovn-kubernetes(dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.275399 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.275475 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.275499 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.275529 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.275551 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:11Z","lastTransitionTime":"2025-11-29T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.378586 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.378656 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.378669 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.378694 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.378713 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:11Z","lastTransitionTime":"2025-11-29T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.481180 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.481275 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.481291 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.481314 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.481330 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:11Z","lastTransitionTime":"2025-11-29T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.584304 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.584361 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.584377 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.584398 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.584413 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:11Z","lastTransitionTime":"2025-11-29T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.688108 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.688158 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.688167 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.688188 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.688200 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:11Z","lastTransitionTime":"2025-11-29T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.791858 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.791918 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.791937 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.791964 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.791981 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:11Z","lastTransitionTime":"2025-11-29T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.894515 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.894568 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.894583 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.894606 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.894621 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:11Z","lastTransitionTime":"2025-11-29T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.998020 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.998065 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.998079 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.998096 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:11 crc kubenswrapper[4947]: I1129 06:35:11.998108 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:11Z","lastTransitionTime":"2025-11-29T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.100875 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.100943 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.100963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.100988 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.101006 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:12Z","lastTransitionTime":"2025-11-29T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.177822 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:12 crc kubenswrapper[4947]: E1129 06:35:12.177939 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.177822 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:12 crc kubenswrapper[4947]: E1129 06:35:12.178128 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.178493 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:12 crc kubenswrapper[4947]: E1129 06:35:12.178790 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.203775 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.203838 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.203850 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.203866 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.203881 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:12Z","lastTransitionTime":"2025-11-29T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.306157 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.306450 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.306615 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.306727 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.306809 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:12Z","lastTransitionTime":"2025-11-29T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.411258 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.411332 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.411354 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.411381 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.411403 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:12Z","lastTransitionTime":"2025-11-29T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.513482 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.514065 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.514142 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.514242 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.514319 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:12Z","lastTransitionTime":"2025-11-29T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.617648 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.617703 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.617713 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.617739 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.617750 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:12Z","lastTransitionTime":"2025-11-29T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.720807 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.720877 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.720887 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.720910 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.720922 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:12Z","lastTransitionTime":"2025-11-29T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.823762 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.823832 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.823844 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.823868 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.823883 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:12Z","lastTransitionTime":"2025-11-29T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.926913 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.926963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.926979 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.927004 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:12 crc kubenswrapper[4947]: I1129 06:35:12.927020 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:12Z","lastTransitionTime":"2025-11-29T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.030608 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.030658 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.030669 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.030689 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.030701 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:13Z","lastTransitionTime":"2025-11-29T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.133431 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.133516 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.133534 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.133571 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.133594 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:13Z","lastTransitionTime":"2025-11-29T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.180461 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:13 crc kubenswrapper[4947]: E1129 06:35:13.180932 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.235958 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.236014 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.236024 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.236047 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.236063 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:13Z","lastTransitionTime":"2025-11-29T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.289794 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.290122 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.290323 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.290421 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.290492 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:13Z","lastTransitionTime":"2025-11-29T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:13 crc kubenswrapper[4947]: E1129 06:35:13.303248 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:13Z is after 2025-08-24T17:21:41Z"
Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.307714 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.307858 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.307955 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.308045 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.308122 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:13Z","lastTransitionTime":"2025-11-29T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:13 crc kubenswrapper[4947]: E1129 06:35:13.319318 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:13Z is after 2025-08-24T17:21:41Z"
Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.322859 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.323069 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.323163 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.323281 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.323385 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:13Z","lastTransitionTime":"2025-11-29T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.339406 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.339448 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.339458 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.339478 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.339489 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:13Z","lastTransitionTime":"2025-11-29T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:13 crc kubenswrapper[4947]: E1129 06:35:13.349247 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:13Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.352546 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.352662 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.352757 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.352849 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.352926 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:13Z","lastTransitionTime":"2025-11-29T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:13 crc kubenswrapper[4947]: E1129 06:35:13.365187 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:13Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:13 crc kubenswrapper[4947]: E1129 06:35:13.365325 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.366882 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.366908 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.366917 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.366932 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.366942 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:13Z","lastTransitionTime":"2025-11-29T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.469319 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.469362 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.469371 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.469387 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.469397 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:13Z","lastTransitionTime":"2025-11-29T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.571704 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.571765 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.571794 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.571808 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.571817 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:13Z","lastTransitionTime":"2025-11-29T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.674711 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.674745 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.674774 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.674789 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.674798 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:13Z","lastTransitionTime":"2025-11-29T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.777255 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.777499 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.777588 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.777685 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.777825 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:13Z","lastTransitionTime":"2025-11-29T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.880977 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.881027 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.881038 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.881056 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.881069 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:13Z","lastTransitionTime":"2025-11-29T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.983497 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.983549 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.983558 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.983574 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:13 crc kubenswrapper[4947]: I1129 06:35:13.983584 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:13Z","lastTransitionTime":"2025-11-29T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.086813 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.086898 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.086913 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.086939 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.086956 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:14Z","lastTransitionTime":"2025-11-29T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.178490 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.178490 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.178506 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:14 crc kubenswrapper[4947]: E1129 06:35:14.179165 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:14 crc kubenswrapper[4947]: E1129 06:35:14.179268 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:14 crc kubenswrapper[4947]: E1129 06:35:14.178969 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.189705 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.189991 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.190057 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.190132 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.190199 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:14Z","lastTransitionTime":"2025-11-29T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.293627 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.293693 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.293703 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.293718 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.293728 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:14Z","lastTransitionTime":"2025-11-29T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.396770 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.396813 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.396824 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.396840 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.396854 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:14Z","lastTransitionTime":"2025-11-29T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.500198 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.500274 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.500282 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.500298 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.500308 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:14Z","lastTransitionTime":"2025-11-29T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.602188 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.602277 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.602297 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.602322 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.602339 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:14Z","lastTransitionTime":"2025-11-29T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.704655 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.704943 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.705039 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.705150 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.705272 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:14Z","lastTransitionTime":"2025-11-29T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.808508 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.808794 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.808866 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.808960 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.809048 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:14Z","lastTransitionTime":"2025-11-29T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.911674 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.912017 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.912159 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.912363 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:14 crc kubenswrapper[4947]: I1129 06:35:14.912505 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:14Z","lastTransitionTime":"2025-11-29T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.014817 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.015145 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.015310 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.015510 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.015648 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:15Z","lastTransitionTime":"2025-11-29T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.118350 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.118395 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.118404 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.118420 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.118429 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:15Z","lastTransitionTime":"2025-11-29T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.177793 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:15 crc kubenswrapper[4947]: E1129 06:35:15.177919 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.220598 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.220668 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.220678 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.220712 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.220724 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:15Z","lastTransitionTime":"2025-11-29T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.323001 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.323045 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.323057 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.323074 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.323089 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:15Z","lastTransitionTime":"2025-11-29T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.424721 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.424749 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.424758 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.424771 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.424780 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:15Z","lastTransitionTime":"2025-11-29T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.527969 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.528349 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.528487 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.528602 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.528708 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:15Z","lastTransitionTime":"2025-11-29T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.630766 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.630798 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.630808 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.630823 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.630833 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:15Z","lastTransitionTime":"2025-11-29T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.733497 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.733559 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.733580 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.733605 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.733622 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:15Z","lastTransitionTime":"2025-11-29T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.836355 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.836407 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.836420 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.836438 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.836454 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:15Z","lastTransitionTime":"2025-11-29T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.939583 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.939633 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.939653 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.939671 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:15 crc kubenswrapper[4947]: I1129 06:35:15.939683 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:15Z","lastTransitionTime":"2025-11-29T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.042590 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.042621 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.042632 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.042649 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.042661 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:16Z","lastTransitionTime":"2025-11-29T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.048073 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs\") pod \"network-metrics-daemon-2fbj5\" (UID: \"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\") " pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:16 crc kubenswrapper[4947]: E1129 06:35:16.048243 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 06:35:16 crc kubenswrapper[4947]: E1129 06:35:16.048294 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs podName:53a3bcac-8ad0-47ce-abee-ee56fd152ea8 nodeName:}" failed. No retries permitted until 2025-11-29 06:35:48.048279983 +0000 UTC m=+99.092662064 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs") pod "network-metrics-daemon-2fbj5" (UID: "53a3bcac-8ad0-47ce-abee-ee56fd152ea8") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.145689 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.145752 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.145773 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.145800 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.145820 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:16Z","lastTransitionTime":"2025-11-29T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.178432 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:16 crc kubenswrapper[4947]: E1129 06:35:16.178570 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.178588 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:16 crc kubenswrapper[4947]: E1129 06:35:16.178735 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.179147 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:16 crc kubenswrapper[4947]: E1129 06:35:16.179583 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.248173 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.248500 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.248574 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.248651 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.248723 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:16Z","lastTransitionTime":"2025-11-29T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.350631 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.350675 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.350687 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.350703 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.350713 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:16Z","lastTransitionTime":"2025-11-29T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.452963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.453006 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.453015 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.453035 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.453045 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:16Z","lastTransitionTime":"2025-11-29T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.555588 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.555629 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.555639 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.555656 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.555666 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:16Z","lastTransitionTime":"2025-11-29T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.657660 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.657706 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.657722 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.657739 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.657752 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:16Z","lastTransitionTime":"2025-11-29T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.760868 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.760923 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.760941 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.760970 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.760987 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:16Z","lastTransitionTime":"2025-11-29T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.863815 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.863865 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.863883 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.863908 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.863925 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:16Z","lastTransitionTime":"2025-11-29T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.966265 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.966300 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.966308 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.966322 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:16 crc kubenswrapper[4947]: I1129 06:35:16.966347 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:16Z","lastTransitionTime":"2025-11-29T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.068892 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.068938 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.068949 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.068963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.068973 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:17Z","lastTransitionTime":"2025-11-29T06:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.171935 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.171999 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.172014 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.172035 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.172046 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:17Z","lastTransitionTime":"2025-11-29T06:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.178350 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:17 crc kubenswrapper[4947]: E1129 06:35:17.178548 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.274351 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.274410 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.274432 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.274460 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.274480 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:17Z","lastTransitionTime":"2025-11-29T06:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.376953 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.376989 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.376997 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.377009 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.377018 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:17Z","lastTransitionTime":"2025-11-29T06:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.479448 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.479498 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.479510 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.479527 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.479538 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:17Z","lastTransitionTime":"2025-11-29T06:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.582327 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.582358 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.582369 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.582386 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.582396 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:17Z","lastTransitionTime":"2025-11-29T06:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.586202 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xlg45_2cbb3532-a15b-4cca-bde1-aa1ae20698f1/kube-multus/0.log" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.586263 4947 generic.go:334] "Generic (PLEG): container finished" podID="2cbb3532-a15b-4cca-bde1-aa1ae20698f1" containerID="35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da" exitCode=1 Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.586288 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xlg45" event={"ID":"2cbb3532-a15b-4cca-bde1-aa1ae20698f1","Type":"ContainerDied","Data":"35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da"} Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.586630 4947 scope.go:117] "RemoveContainer" containerID="35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.600940 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fbj5\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:17Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.617381 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:17Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.634027 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"682a3ca0-7f80-4aa3-8627-44c5f9d6c661\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0decb70631f9a60b5a44909b2cd152c099aa6955393b715617a93d2639a8f211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7347898f9e11318a33ea5f24ef489a4e58da64e0631ac46aa91f30f5691ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836bfd5239874f47639673b177b0d441dff3d84e255c7c6d1983c9e0db5134fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:17Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.648260 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:17Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.686031 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.686066 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.686076 4947 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.686092 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.686102 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:17Z","lastTransitionTime":"2025-11-29T06:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.690922 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:56Z\\\",\\\"message\\\":\\\"tor *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:56.438063 6572 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 06:34:56.438071 6572 handler.go:208] Removed *v1.NetworkPolicy event handler 
4\\\\nI1129 06:34:56.438097 6572 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 06:34:56.438057 6572 factory.go:656] Stopping watch factory\\\\nI1129 06:34:56.438088 6572 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 06:34:56.438055 6572 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 06:34:56.438275 6572 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:56.438450 6572 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 06:34:56.438512 6572 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 06:34:56.438613 6572 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z4rxq_openshift-ovn-kubernetes(dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf
2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:17Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.721325 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1
fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:17Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.733116 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:17Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.749166 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:17Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.759302 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b3
5462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:17Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.766518 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:17Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.789503 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.789521 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.789529 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.789568 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.789581 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:17Z","lastTransitionTime":"2025-11-29T06:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.804565 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a287baf-0c87-4698-9553-6f94927fbf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69ea28bcbeb8379671147cd41f131b8b37b41a285319b082c43381a56cdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754f770b2635be1fc785e0cc958e0c885dd7516ba54de760493ec7778d738708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b25cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:17Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.816172 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-29T06:35:17Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.830649 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\
",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:17Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.840833 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:17Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.853940 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:35:16Z\\\",\\\"message\\\":\\\"2025-11-29T06:34:31+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c66371c3-af0c-4d62-8105-c20fdc1c09f0\\\\n2025-11-29T06:34:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c66371c3-af0c-4d62-8105-c20fdc1c09f0 to /host/opt/cni/bin/\\\\n2025-11-29T06:34:31Z [verbose] multus-daemon started\\\\n2025-11-29T06:34:31Z [verbose] Readiness Indicator file check\\\\n2025-11-29T06:35:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:17Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.875025 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:17Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.890450 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:17Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.891851 4947 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.891881 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.891892 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.891908 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.891920 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:17Z","lastTransitionTime":"2025-11-29T06:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.904777 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:17Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.995146 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.995189 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.995198 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.995211 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:17 crc kubenswrapper[4947]: I1129 06:35:17.995239 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:17Z","lastTransitionTime":"2025-11-29T06:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.098568 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.098619 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.098635 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.098657 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.098674 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:18Z","lastTransitionTime":"2025-11-29T06:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.177693 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.177756 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:18 crc kubenswrapper[4947]: E1129 06:35:18.177822 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.177760 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:18 crc kubenswrapper[4947]: E1129 06:35:18.177937 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:18 crc kubenswrapper[4947]: E1129 06:35:18.178060 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.201779 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.201829 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.201841 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.201860 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.201881 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:18Z","lastTransitionTime":"2025-11-29T06:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.305089 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.305135 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.305143 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.305161 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.305171 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:18Z","lastTransitionTime":"2025-11-29T06:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.408634 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.408672 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.408682 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.408702 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.408713 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:18Z","lastTransitionTime":"2025-11-29T06:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.511477 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.511527 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.511539 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.511554 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.511565 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:18Z","lastTransitionTime":"2025-11-29T06:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.591018 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xlg45_2cbb3532-a15b-4cca-bde1-aa1ae20698f1/kube-multus/0.log" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.591079 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xlg45" event={"ID":"2cbb3532-a15b-4cca-bde1-aa1ae20698f1","Type":"ContainerStarted","Data":"63ddd0da1118c2e86da1aea51f8248927f80bc1d21790723952b4d59f294cd76"} Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.603809 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:18Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.616485 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.616551 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.616569 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.616592 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.616610 4947 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:18Z","lastTransitionTime":"2025-11-29T06:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.617023 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ddd0da1118c2e86da1aea51f8248927f80bc1d21790723952b4d59f294cd76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"c
ri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:35:16Z\\\",\\\"message\\\":\\\"2025-11-29T06:34:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c66371c3-af0c-4d62-8105-c20fdc1c09f0\\\\n2025-11-29T06:34:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c66371c3-af0c-4d62-8105-c20fdc1c09f0 to /host/opt/cni/bin/\\\\n2025-11-29T06:34:31Z [verbose] multus-daemon started\\\\n2025-11-29T06:34:31Z [verbose] Readiness Indicator file check\\\\n2025-11-29T06:35:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\
"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:18Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.637192 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:18Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.649677 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:18Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.664194 4947 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:18Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.674855 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fbj5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:18Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.687588 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:18Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.698118 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"682a3ca0-7f80-4aa3-8627-44c5f9d6c661\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0decb70631f9a60b5a44909b2cd152c099aa6955393b715617a93d2639a8f211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7347898f9e11318a33ea5f24ef489a4e58da64e0631ac46aa91f30f5691ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836bfd5239874f47639673b177b0d441dff3d84e255c7c6d1983c9e0db5134fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:18Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.709404 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:18Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.719061 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.719089 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.719098 4947 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.719113 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.719122 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:18Z","lastTransitionTime":"2025-11-29T06:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.728726 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:56Z\\\",\\\"message\\\":\\\"tor *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:56.438063 6572 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 06:34:56.438071 6572 handler.go:208] Removed *v1.NetworkPolicy event handler 
4\\\\nI1129 06:34:56.438097 6572 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 06:34:56.438057 6572 factory.go:656] Stopping watch factory\\\\nI1129 06:34:56.438088 6572 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 06:34:56.438055 6572 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 06:34:56.438275 6572 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:56.438450 6572 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 06:34:56.438512 6572 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 06:34:56.438613 6572 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z4rxq_openshift-ovn-kubernetes(dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf
2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:18Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.742137 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1
fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:18Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.754074 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:18Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.767188 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:18Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.777012 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b3
5462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:18Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.786113 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:18Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.796620 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a287baf-0c87-4698-9553-6f94927fbf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69ea28bcbeb8379671147cd41f131b8b37b41a285319b082c43381a56cdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754f770b2635be1fc785e0cc958e0c885dd7516ba54de760493ec7778d738708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b25cq\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:18Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.807207 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:18Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.820931 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.820980 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.820994 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.821012 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.821032 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:18Z","lastTransitionTime":"2025-11-29T06:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.826382 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:18Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.924299 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.924335 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.924347 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.924370 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:18 crc kubenswrapper[4947]: I1129 06:35:18.924382 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:18Z","lastTransitionTime":"2025-11-29T06:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.026836 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.026887 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.027301 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.030852 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.030876 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:19Z","lastTransitionTime":"2025-11-29T06:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.133123 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.133185 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.133195 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.133208 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.133246 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:19Z","lastTransitionTime":"2025-11-29T06:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.183321 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:19 crc kubenswrapper[4947]: E1129 06:35:19.183504 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.214285 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036
cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117
b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\
\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:19Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.227695 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:19Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.235660 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.235694 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.235705 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.235723 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.235734 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:19Z","lastTransitionTime":"2025-11-29T06:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.241556 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:19Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.253647 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:19Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.268410 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ddd0da1118c2e86da1aea51f8248927f80bc1d21790723952b4d59f294cd76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:35:16Z\\\",\\\"message\\\":\\\"2025-11-29T06:34:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c66371c3-af0c-4d62-8105-c20fdc1c09f0\\\\n2025-11-29T06:34:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c66371c3-af0c-4d62-8105-c20fdc1c09f0 to /host/opt/cni/bin/\\\\n2025-11-29T06:34:31Z [verbose] multus-daemon started\\\\n2025-11-29T06:34:31Z [verbose] Readiness Indicator file check\\\\n2025-11-29T06:35:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:19Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.282128 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:19Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.294435 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"682a3ca0-7f80-4aa3-8627-44c5f9d6c661\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0decb70631f9a60b5a44909b2cd152c099aa6955393b715617a93d2639a8f211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7347898f9e11318a33ea5f24ef489a4e58da64e0631ac46aa91f30f5691ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836bfd5239874f47639673b177b0d441dff3d84e255c7c6d1983c9e0db5134fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:19Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.309067 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:19Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.319605 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fbj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:19Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:19 crc 
kubenswrapper[4947]: I1129 06:35:19.332460 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da60
1900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 
06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:19Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.337474 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.337504 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.337513 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.337527 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.337536 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:19Z","lastTransitionTime":"2025-11-29T06:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.354696 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:19Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.365873 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:19Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.380939 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:56Z\\\",\\\"message\\\":\\\"tor *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:56.438063 6572 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 06:34:56.438071 6572 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 
06:34:56.438097 6572 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 06:34:56.438057 6572 factory.go:656] Stopping watch factory\\\\nI1129 06:34:56.438088 6572 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 06:34:56.438055 6572 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 06:34:56.438275 6572 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:56.438450 6572 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 06:34:56.438512 6572 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 06:34:56.438613 6572 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z4rxq_openshift-ovn-kubernetes(dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf
2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:19Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.395697 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T06:35:19Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.417199 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:19Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.430696 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:19Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.440209 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.440267 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.440283 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.440306 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.440316 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:19Z","lastTransitionTime":"2025-11-29T06:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.443617 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:19Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.456494 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a287baf-0c87-4698-9553-6f94927fbf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69ea28
bcbeb8379671147cd41f131b8b37b41a285319b082c43381a56cdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754f770b2635be1fc785e0cc958e0c885dd7516ba54de760493ec7778d738708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b25cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:19Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.541987 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.542023 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.542033 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.542050 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.542061 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:19Z","lastTransitionTime":"2025-11-29T06:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.643995 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.644042 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.644053 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.644071 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.644084 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:19Z","lastTransitionTime":"2025-11-29T06:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.747291 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.747343 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.747352 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.747370 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.747382 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:19Z","lastTransitionTime":"2025-11-29T06:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.849428 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.849473 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.849486 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.849501 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.849513 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:19Z","lastTransitionTime":"2025-11-29T06:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.952330 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.952369 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.952377 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.952391 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:19 crc kubenswrapper[4947]: I1129 06:35:19.952399 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:19Z","lastTransitionTime":"2025-11-29T06:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.055284 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.055365 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.055382 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.055402 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.055417 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:20Z","lastTransitionTime":"2025-11-29T06:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.158457 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.158501 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.158512 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.158528 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.158540 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:20Z","lastTransitionTime":"2025-11-29T06:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.178686 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.178755 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.178686 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:20 crc kubenswrapper[4947]: E1129 06:35:20.178822 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:20 crc kubenswrapper[4947]: E1129 06:35:20.178933 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:20 crc kubenswrapper[4947]: E1129 06:35:20.179075 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.260867 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.260899 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.260907 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.260920 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.260928 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:20Z","lastTransitionTime":"2025-11-29T06:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.363326 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.363366 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.363377 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.363393 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.363404 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:20Z","lastTransitionTime":"2025-11-29T06:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.465157 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.465217 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.465239 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.465254 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.465264 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:20Z","lastTransitionTime":"2025-11-29T06:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.567379 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.567415 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.567423 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.567437 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.567445 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:20Z","lastTransitionTime":"2025-11-29T06:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.670160 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.670205 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.670236 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.670253 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.670262 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:20Z","lastTransitionTime":"2025-11-29T06:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.773338 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.773407 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.773428 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.773455 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.773474 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:20Z","lastTransitionTime":"2025-11-29T06:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.876424 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.876477 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.876489 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.876507 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.876519 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:20Z","lastTransitionTime":"2025-11-29T06:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.979348 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.979391 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.979402 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.979419 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:20 crc kubenswrapper[4947]: I1129 06:35:20.979430 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:20Z","lastTransitionTime":"2025-11-29T06:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.083166 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.083251 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.083269 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.083296 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.083315 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:21Z","lastTransitionTime":"2025-11-29T06:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.178564 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:21 crc kubenswrapper[4947]: E1129 06:35:21.178801 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.184882 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.184910 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.184918 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.184931 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.184940 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:21Z","lastTransitionTime":"2025-11-29T06:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.287704 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.287757 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.287770 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.287791 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.287805 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:21Z","lastTransitionTime":"2025-11-29T06:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.391258 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.391303 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.391311 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.391326 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.391336 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:21Z","lastTransitionTime":"2025-11-29T06:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.494290 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.494350 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.494364 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.494389 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.494410 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:21Z","lastTransitionTime":"2025-11-29T06:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.597066 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.597107 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.597119 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.597136 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.597148 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:21Z","lastTransitionTime":"2025-11-29T06:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.699657 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.699697 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.699708 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.699741 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.699753 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:21Z","lastTransitionTime":"2025-11-29T06:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.803462 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.803509 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.803521 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.803536 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.803548 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:21Z","lastTransitionTime":"2025-11-29T06:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.905890 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.905929 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.905940 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.905955 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:21 crc kubenswrapper[4947]: I1129 06:35:21.905965 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:21Z","lastTransitionTime":"2025-11-29T06:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.010424 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.010493 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.010515 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.010553 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.010575 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:22Z","lastTransitionTime":"2025-11-29T06:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.113553 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.113616 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.113633 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.113688 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.113720 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:22Z","lastTransitionTime":"2025-11-29T06:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.178398 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.178415 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:22 crc kubenswrapper[4947]: E1129 06:35:22.178512 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.178582 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:22 crc kubenswrapper[4947]: E1129 06:35:22.178706 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:22 crc kubenswrapper[4947]: E1129 06:35:22.179495 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.179985 4947 scope.go:117] "RemoveContainer" containerID="ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.216425 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.216492 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.216509 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.216532 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.216549 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:22Z","lastTransitionTime":"2025-11-29T06:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.319551 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.319578 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.319586 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.319600 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.319609 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:22Z","lastTransitionTime":"2025-11-29T06:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.422011 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.422068 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.422081 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.422098 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.422110 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:22Z","lastTransitionTime":"2025-11-29T06:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.527166 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.527211 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.527286 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.527308 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.527325 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:22Z","lastTransitionTime":"2025-11-29T06:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.603490 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z4rxq_dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/ovnkube-controller/2.log" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.605959 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerStarted","Data":"f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920"} Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.606416 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.630082 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.630117 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.630129 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.630144 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.630155 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:22Z","lastTransitionTime":"2025-11-29T06:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.631756 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.652352 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.665631 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.676602 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.688548 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ddd0da1118c2e86da1aea51f8248927f80bc1d21790723952b4d59f294cd76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:35:16Z\\\",\\\"message\\\":\\\"2025-11-29T06:34:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c66371c3-af0c-4d62-8105-c20fdc1c09f0\\\\n2025-11-29T06:34:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c66371c3-af0c-4d62-8105-c20fdc1c09f0 to /host/opt/cni/bin/\\\\n2025-11-29T06:34:31Z [verbose] multus-daemon started\\\\n2025-11-29T06:34:31Z [verbose] Readiness Indicator file check\\\\n2025-11-29T06:35:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.700329 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.714066 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"682a3ca0-7f80-4aa3-8627-44c5f9d6c661\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0decb70631f9a60b5a44909b2cd152c099aa6955393b715617a93d2639a8f211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7347898f9e11318a33ea5f24ef489a4e58da64e0631ac46aa91f30f5691ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836bfd5239874f47639673b177b0d441dff3d84e255c7c6d1983c9e0db5134fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.726950 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.733073 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.733125 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.733136 4947 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.733157 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.733170 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:22Z","lastTransitionTime":"2025-11-29T06:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.739211 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fbj5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.754613 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.773331 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.792809 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.815583 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:56Z\\\",\\\"message\\\":\\\"tor *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:56.438063 6572 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 06:34:56.438071 6572 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 06:34:56.438097 6572 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 06:34:56.438057 6572 factory.go:656] Stopping watch factory\\\\nI1129 06:34:56.438088 6572 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 06:34:56.438055 6572 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 06:34:56.438275 6572 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:56.438450 6572 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 06:34:56.438512 6572 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 06:34:56.438613 6572 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.830499 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.835342 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.835451 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.835564 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.835688 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.835803 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:22Z","lastTransitionTime":"2025-11-29T06:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.845829 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.854629 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.862481 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/service
ca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.872280 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a287baf-0c87-4698-9553-6f94927fbf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69ea28bcbeb8379671147cd41f131b8b37b41a285319b082c43381a56cdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754f770b2635be1fc785e0cc958e0c885dd75
16ba54de760493ec7778d738708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b25cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.938438 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.938668 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.938730 4947 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.938788 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:22 crc kubenswrapper[4947]: I1129 06:35:22.938905 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:22Z","lastTransitionTime":"2025-11-29T06:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.041835 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.041871 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.041881 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.041899 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.041914 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:23Z","lastTransitionTime":"2025-11-29T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.144626 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.144668 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.144680 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.144732 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.144744 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:23Z","lastTransitionTime":"2025-11-29T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.178480 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:23 crc kubenswrapper[4947]: E1129 06:35:23.178669 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.248401 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.248892 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.249395 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.249740 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.249949 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:23Z","lastTransitionTime":"2025-11-29T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.353900 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.354174 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.354288 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.354383 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.354442 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:23Z","lastTransitionTime":"2025-11-29T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.431387 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.431417 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.431425 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.431440 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.431449 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:23Z","lastTransitionTime":"2025-11-29T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:23 crc kubenswrapper[4947]: E1129 06:35:23.448622 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:23Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.453245 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.453287 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.453300 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.453319 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.453335 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:23Z","lastTransitionTime":"2025-11-29T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:23 crc kubenswrapper[4947]: E1129 06:35:23.471393 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:23Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.475901 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.476020 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.476100 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.476177 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.476265 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:23Z","lastTransitionTime":"2025-11-29T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:23 crc kubenswrapper[4947]: E1129 06:35:23.495367 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:23Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.502630 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.502705 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.502724 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.502749 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.502767 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:23Z","lastTransitionTime":"2025-11-29T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:23 crc kubenswrapper[4947]: E1129 06:35:23.523814 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:23Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.529672 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.529737 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.529791 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.529815 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.529831 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:23Z","lastTransitionTime":"2025-11-29T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:23 crc kubenswrapper[4947]: E1129 06:35:23.549923 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:23Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:23 crc kubenswrapper[4947]: E1129 06:35:23.550089 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.552377 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.552453 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.552472 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.552499 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.552519 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:23Z","lastTransitionTime":"2025-11-29T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.655182 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.655263 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.655283 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.655306 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.655321 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:23Z","lastTransitionTime":"2025-11-29T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.758287 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.758362 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.758373 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.758390 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.758401 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:23Z","lastTransitionTime":"2025-11-29T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.862049 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.862112 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.862131 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.862157 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.862174 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:23Z","lastTransitionTime":"2025-11-29T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.965902 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.965961 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.965978 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.966005 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:23 crc kubenswrapper[4947]: I1129 06:35:23.966020 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:23Z","lastTransitionTime":"2025-11-29T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.068587 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.068651 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.068674 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.068725 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.068748 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:24Z","lastTransitionTime":"2025-11-29T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.171205 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.171255 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.171289 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.171306 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.171319 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:24Z","lastTransitionTime":"2025-11-29T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.178073 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.178164 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.178174 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:24 crc kubenswrapper[4947]: E1129 06:35:24.178296 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:24 crc kubenswrapper[4947]: E1129 06:35:24.178533 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:24 crc kubenswrapper[4947]: E1129 06:35:24.178585 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.274063 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.274437 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.274546 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.274693 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.274797 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:24Z","lastTransitionTime":"2025-11-29T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.377622 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.377694 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.377711 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.377734 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.377754 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:24Z","lastTransitionTime":"2025-11-29T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.480681 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.481031 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.481268 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.481563 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.481727 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:24Z","lastTransitionTime":"2025-11-29T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.584065 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.584370 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.584514 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.584611 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.584677 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:24Z","lastTransitionTime":"2025-11-29T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.615854 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z4rxq_dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/ovnkube-controller/3.log" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.616886 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z4rxq_dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/ovnkube-controller/2.log" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.620128 4947 generic.go:334] "Generic (PLEG): container finished" podID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerID="f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920" exitCode=1 Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.620172 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerDied","Data":"f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920"} Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.620257 4947 scope.go:117] "RemoveContainer" containerID="ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.621299 4947 scope.go:117] "RemoveContainer" containerID="f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920" Nov 29 06:35:24 crc kubenswrapper[4947]: E1129 06:35:24.621527 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-z4rxq_openshift-ovn-kubernetes(dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.651427 4947 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf
189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e660
01772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:24Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.672847 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:24Z is after 2025-08-24T17:21:41Z" Nov 
29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.687026 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.687085 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.687103 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.687128 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.687146 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:24Z","lastTransitionTime":"2025-11-29T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.688511 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:24Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.703393 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:24Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.720031 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ddd0da1118c2e86da1aea51f8248927f80bc1d21790723952b4d59f294cd76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:35:16Z\\\",\\\"message\\\":\\\"2025-11-29T06:34:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c66371c3-af0c-4d62-8105-c20fdc1c09f0\\\\n2025-11-29T06:34:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c66371c3-af0c-4d62-8105-c20fdc1c09f0 to /host/opt/cni/bin/\\\\n2025-11-29T06:34:31Z [verbose] multus-daemon started\\\\n2025-11-29T06:34:31Z [verbose] Readiness Indicator file check\\\\n2025-11-29T06:35:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:24Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.741104 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:24Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.760131 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"682a3ca0-7f80-4aa3-8627-44c5f9d6c661\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0decb70631f9a60b5a44909b2cd152c099aa6955393b715617a93d2639a8f211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7347898f9e11318a33ea5f24ef489a4e58da64e0631ac46aa91f30f5691ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836bfd5239874f47639673b177b0d441dff3d84e255c7c6d1983c9e0db5134fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:24Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.777207 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:24Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.790045 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.790538 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.790560 4947 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.790584 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.790595 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:24Z","lastTransitionTime":"2025-11-29T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.791645 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fbj5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:24Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.809685 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:24Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.824715 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:24Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.843381 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:24Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.870039 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:56Z\\\",\\\"message\\\":\\\"tor *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:56.438063 6572 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 06:34:56.438071 6572 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 06:34:56.438097 6572 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 06:34:56.438057 6572 factory.go:656] Stopping watch factory\\\\nI1129 06:34:56.438088 6572 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 06:34:56.438055 6572 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 06:34:56.438275 6572 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:56.438450 6572 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 06:34:56.438512 6572 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 06:34:56.438613 6572 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z]\\\\nI1129 06:35:23.026027 6879 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1129 06:35:23.026155 6879 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1129 06:35:23.026193 6879 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1129 06:35:23.026005 6879 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-5zgvc in node 
crc\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd
190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:24Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.888069 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T06:35:24Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.893209 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.893281 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.893295 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.893333 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.893346 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:24Z","lastTransitionTime":"2025-11-29T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.903036 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:24Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.915576 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:24Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.926600 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/service
ca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:24Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.937726 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a287baf-0c87-4698-9553-6f94927fbf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69ea28bcbeb8379671147cd41f131b8b37b41a285319b082c43381a56cdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754f770b2635be1fc785e0cc958e0c885dd75
16ba54de760493ec7778d738708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b25cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:24Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.995422 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.995465 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.995476 4947 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.995491 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:24 crc kubenswrapper[4947]: I1129 06:35:24.995503 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:24Z","lastTransitionTime":"2025-11-29T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.098676 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.098721 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.098731 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.098746 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.098757 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:25Z","lastTransitionTime":"2025-11-29T06:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.178614 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:25 crc kubenswrapper[4947]: E1129 06:35:25.178760 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.202637 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.202728 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.202743 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.202767 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.202786 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:25Z","lastTransitionTime":"2025-11-29T06:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.306312 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.306381 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.306399 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.306446 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.306483 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:25Z","lastTransitionTime":"2025-11-29T06:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.409937 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.409996 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.410014 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.410032 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.410047 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:25Z","lastTransitionTime":"2025-11-29T06:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.513388 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.513436 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.513445 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.513655 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.513672 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:25Z","lastTransitionTime":"2025-11-29T06:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.617030 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.617106 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.617129 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.617162 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.617185 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:25Z","lastTransitionTime":"2025-11-29T06:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.627093 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z4rxq_dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/ovnkube-controller/3.log" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.720729 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.720798 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.720816 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.720841 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.720858 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:25Z","lastTransitionTime":"2025-11-29T06:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.824487 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.824558 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.824582 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.824615 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.824638 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:25Z","lastTransitionTime":"2025-11-29T06:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.927589 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.927634 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.927647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.927664 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:25 crc kubenswrapper[4947]: I1129 06:35:25.927682 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:25Z","lastTransitionTime":"2025-11-29T06:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.030331 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.030423 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.030456 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.030488 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.030508 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:26Z","lastTransitionTime":"2025-11-29T06:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.134206 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.134310 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.134339 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.134377 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.134400 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:26Z","lastTransitionTime":"2025-11-29T06:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.177894 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.177984 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.178093 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:26 crc kubenswrapper[4947]: E1129 06:35:26.178099 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:26 crc kubenswrapper[4947]: E1129 06:35:26.178262 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:26 crc kubenswrapper[4947]: E1129 06:35:26.178366 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.237176 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.237277 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.237296 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.237323 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.237343 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:26Z","lastTransitionTime":"2025-11-29T06:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.340717 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.340781 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.340804 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.340836 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.340856 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:26Z","lastTransitionTime":"2025-11-29T06:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.443141 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.443169 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.443178 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.443192 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.443201 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:26Z","lastTransitionTime":"2025-11-29T06:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.546360 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.546401 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.546414 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.546429 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.546440 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:26Z","lastTransitionTime":"2025-11-29T06:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.650736 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.650791 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.650802 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.650823 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.650835 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:26Z","lastTransitionTime":"2025-11-29T06:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.754360 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.754420 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.754436 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.754459 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.754478 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:26Z","lastTransitionTime":"2025-11-29T06:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.857628 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.857684 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.857695 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.857714 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.857727 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:26Z","lastTransitionTime":"2025-11-29T06:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.961413 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.961538 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.961560 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.961591 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:26 crc kubenswrapper[4947]: I1129 06:35:26.961610 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:26Z","lastTransitionTime":"2025-11-29T06:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.064139 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.064199 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.064214 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.064259 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.064275 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:27Z","lastTransitionTime":"2025-11-29T06:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.166930 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.166970 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.166981 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.166998 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.167012 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:27Z","lastTransitionTime":"2025-11-29T06:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.178606 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:27 crc kubenswrapper[4947]: E1129 06:35:27.178759 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.269373 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.269457 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.269473 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.269519 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.269537 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:27Z","lastTransitionTime":"2025-11-29T06:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.371946 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.371985 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.372011 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.372040 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.372051 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:27Z","lastTransitionTime":"2025-11-29T06:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.475382 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.475497 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.475524 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.475555 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.475579 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:27Z","lastTransitionTime":"2025-11-29T06:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.578413 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.578449 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.578486 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.578503 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.578514 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:27Z","lastTransitionTime":"2025-11-29T06:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.681317 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.681369 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.681382 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.681399 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.681410 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:27Z","lastTransitionTime":"2025-11-29T06:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.783838 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.783912 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.783925 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.783942 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.783956 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:27Z","lastTransitionTime":"2025-11-29T06:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.886260 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.886306 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.886315 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.886337 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.886347 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:27Z","lastTransitionTime":"2025-11-29T06:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.988917 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.988965 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.988975 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.988990 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:27 crc kubenswrapper[4947]: I1129 06:35:27.988999 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:27Z","lastTransitionTime":"2025-11-29T06:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.092136 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.092284 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.092310 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.092342 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.092365 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:28Z","lastTransitionTime":"2025-11-29T06:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.178402 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.178833 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.178885 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:28 crc kubenswrapper[4947]: E1129 06:35:28.178800 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:28 crc kubenswrapper[4947]: E1129 06:35:28.178997 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:28 crc kubenswrapper[4947]: E1129 06:35:28.179073 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.196159 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.196200 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.196243 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.196268 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.196286 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:28Z","lastTransitionTime":"2025-11-29T06:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.299156 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.299200 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.299237 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.299262 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.299278 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:28Z","lastTransitionTime":"2025-11-29T06:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.447360 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.447411 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.447425 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.447440 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.447449 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:28Z","lastTransitionTime":"2025-11-29T06:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.551121 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.551249 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.551918 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.551958 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.551995 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:28Z","lastTransitionTime":"2025-11-29T06:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.654813 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.655164 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.655186 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.655261 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.655288 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:28Z","lastTransitionTime":"2025-11-29T06:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.758163 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.758214 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.758241 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.758256 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.758266 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:28Z","lastTransitionTime":"2025-11-29T06:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.860526 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.860575 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.860587 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.860604 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.860614 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:28Z","lastTransitionTime":"2025-11-29T06:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.963830 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.963894 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.963918 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.963946 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:28 crc kubenswrapper[4947]: I1129 06:35:28.963969 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:28Z","lastTransitionTime":"2025-11-29T06:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.066361 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.066434 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.066453 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.066484 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.066504 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:29Z","lastTransitionTime":"2025-11-29T06:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.169965 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.170578 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.170736 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.170876 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.171015 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:29Z","lastTransitionTime":"2025-11-29T06:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.178387 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:29 crc kubenswrapper[4947]: E1129 06:35:29.178574 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.202517 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036
cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117
b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\
\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.219643 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.232926 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.246295 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.263099 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ddd0da1118c2e86da1aea51f8248927f80bc1d21790723952b4d59f294cd76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:35:16Z\\\",\\\"message\\\":\\\"2025-11-29T06:34:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c66371c3-af0c-4d62-8105-c20fdc1c09f0\\\\n2025-11-29T06:34:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c66371c3-af0c-4d62-8105-c20fdc1c09f0 to /host/opt/cni/bin/\\\\n2025-11-29T06:34:31Z [verbose] multus-daemon started\\\\n2025-11-29T06:34:31Z [verbose] Readiness Indicator file check\\\\n2025-11-29T06:35:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.273138 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.273266 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.273309 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 
06:35:29.273330 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.273342 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:29Z","lastTransitionTime":"2025-11-29T06:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.284153 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.307874 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"682a3ca0-7f80-4aa3-8627-44c5f9d6c661\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0decb70631f9a60b5a44909b2cd152c099aa6955393b715617a93d2639a8f211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7347898f9e11318a33ea5f24ef489a4e58da64e0631ac46aa91f30f5691ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836bfd5239874f47639673b177b0d441dff3d84e255c7c6d1983c9e0db5134fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.324944 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.342453 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fbj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:29 crc 
kubenswrapper[4947]: I1129 06:35:29.358808 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da60
1900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 
06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.375045 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.376215 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.376291 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.376307 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.376332 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.376348 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:29Z","lastTransitionTime":"2025-11-29T06:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.391683 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.411587 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee517e6c1aeb923ec34bc4ffa9dd6d445887230476b7ea3be7f09d82f1465442\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:34:56Z\\\",\\\"message\\\":\\\"tor *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:56.438063 6572 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1129 06:34:56.438071 6572 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1129 06:34:56.438097 6572 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1129 06:34:56.438057 6572 factory.go:656] Stopping watch factory\\\\nI1129 06:34:56.438088 6572 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1129 06:34:56.438055 6572 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1129 06:34:56.438275 6572 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 06:34:56.438450 6572 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 06:34:56.438512 6572 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 06:34:56.438613 6572 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z]\\\\nI1129 06:35:23.026027 6879 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1129 06:35:23.026155 6879 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1129 06:35:23.026193 6879 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1129 06:35:23.026005 6879 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-5zgvc in node 
crc\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd
190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.424707 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T06:35:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.445301 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.458477 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.472468 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34
:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.479403 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.479456 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.479471 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.479493 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.479507 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:29Z","lastTransitionTime":"2025-11-29T06:35:29Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.486035 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a287baf-0c87-4698-9553-6f94927fbf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69ea28bcbeb8379671147cd41f131b8b37b41a285319b082c43381a56cdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754f770b2635be1fc785e0cc958e0c885dd7516ba54de760493ec7778d738708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b25cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:29Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:29 crc kubenswrapper[4947]: 
I1129 06:35:29.582078 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.582145 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.582168 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.582200 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.582265 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:29Z","lastTransitionTime":"2025-11-29T06:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.685231 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.685290 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.685302 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.685323 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.685339 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:29Z","lastTransitionTime":"2025-11-29T06:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.787539 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.787579 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.787590 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.787606 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.787618 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:29Z","lastTransitionTime":"2025-11-29T06:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.889993 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.890044 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.890057 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.890075 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.890091 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:29Z","lastTransitionTime":"2025-11-29T06:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.993305 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.993379 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.993413 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.993445 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:29 crc kubenswrapper[4947]: I1129 06:35:29.993469 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:29Z","lastTransitionTime":"2025-11-29T06:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.096725 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.096776 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.096791 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.096815 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.096833 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:30Z","lastTransitionTime":"2025-11-29T06:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.177809 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.177853 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:30 crc kubenswrapper[4947]: E1129 06:35:30.177964 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.177776 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:30 crc kubenswrapper[4947]: E1129 06:35:30.178257 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:30 crc kubenswrapper[4947]: E1129 06:35:30.178364 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.200283 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.200371 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.200399 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.200429 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.200456 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:30Z","lastTransitionTime":"2025-11-29T06:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.302854 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.302907 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.302917 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.302935 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.302950 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:30Z","lastTransitionTime":"2025-11-29T06:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.405280 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.405306 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.405314 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.405327 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.405336 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:30Z","lastTransitionTime":"2025-11-29T06:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.509027 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.509109 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.509136 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.509160 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.509193 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:30Z","lastTransitionTime":"2025-11-29T06:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.612206 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.612305 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.612323 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.612350 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.612368 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:30Z","lastTransitionTime":"2025-11-29T06:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.715107 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.715156 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.715166 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.715188 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.715200 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:30Z","lastTransitionTime":"2025-11-29T06:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.818275 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.818324 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.818340 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.818359 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.818374 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:30Z","lastTransitionTime":"2025-11-29T06:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.921692 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.922153 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.922387 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.922599 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:30 crc kubenswrapper[4947]: I1129 06:35:30.922741 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:30Z","lastTransitionTime":"2025-11-29T06:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.025821 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.026209 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.026396 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.026559 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.026733 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:31Z","lastTransitionTime":"2025-11-29T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.130277 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.130328 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.130345 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.130367 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.130384 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:31Z","lastTransitionTime":"2025-11-29T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.179563 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:31 crc kubenswrapper[4947]: E1129 06:35:31.180099 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.194946 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.232883 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.233075 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.233186 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.233303 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.233387 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:31Z","lastTransitionTime":"2025-11-29T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.351670 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.351808 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.352293 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.352340 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.352363 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:31Z","lastTransitionTime":"2025-11-29T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.455125 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.455184 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.455200 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.455257 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.455274 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:31Z","lastTransitionTime":"2025-11-29T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.558767 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.559115 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.559398 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.559615 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.559804 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:31Z","lastTransitionTime":"2025-11-29T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.663443 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.663486 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.663498 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.663514 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.663526 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:31Z","lastTransitionTime":"2025-11-29T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.766209 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.766288 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.766308 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.766333 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.766352 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:31Z","lastTransitionTime":"2025-11-29T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.869589 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.869840 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.870005 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.870057 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.870077 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:31Z","lastTransitionTime":"2025-11-29T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.973449 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.973492 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.973500 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.973516 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:31 crc kubenswrapper[4947]: I1129 06:35:31.973527 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:31Z","lastTransitionTime":"2025-11-29T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.076618 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.076674 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.076696 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.076725 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.076747 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:32Z","lastTransitionTime":"2025-11-29T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.178139 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.178232 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.178253 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:32 crc kubenswrapper[4947]: E1129 06:35:32.178472 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:32 crc kubenswrapper[4947]: E1129 06:35:32.178607 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:32 crc kubenswrapper[4947]: E1129 06:35:32.178729 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.180943 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.180987 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.181004 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.181027 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.181044 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:32Z","lastTransitionTime":"2025-11-29T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.284194 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.284291 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.284315 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.284343 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.284363 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:32Z","lastTransitionTime":"2025-11-29T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.297884 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.298019 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.298059 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:32 crc kubenswrapper[4947]: E1129 06:35:32.298292 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 06:35:32 crc kubenswrapper[4947]: E1129 06:35:32.298362 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 06:36:36.298341876 +0000 UTC m=+147.342723987 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 06:35:32 crc kubenswrapper[4947]: E1129 06:35:32.298487 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:36.298460128 +0000 UTC m=+147.342842259 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:35:32 crc kubenswrapper[4947]: E1129 06:35:32.298582 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 06:35:32 crc kubenswrapper[4947]: E1129 06:35:32.298717 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 06:36:36.298681444 +0000 UTC m=+147.343063565 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.387960 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.388022 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.388034 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.388054 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.388068 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:32Z","lastTransitionTime":"2025-11-29T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.493312 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.493698 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.493868 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.494054 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.494491 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:32Z","lastTransitionTime":"2025-11-29T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.499783 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.499850 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:32 crc kubenswrapper[4947]: E1129 06:35:32.500066 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 06:35:32 crc kubenswrapper[4947]: E1129 06:35:32.500098 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 06:35:32 crc kubenswrapper[4947]: E1129 06:35:32.500120 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:35:32 crc kubenswrapper[4947]: E1129 06:35:32.500139 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 06:35:32 crc 
kubenswrapper[4947]: E1129 06:35:32.500172 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 06:35:32 crc kubenswrapper[4947]: E1129 06:35:32.500194 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:35:32 crc kubenswrapper[4947]: E1129 06:35:32.500197 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 06:36:36.500170884 +0000 UTC m=+147.544553005 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:35:32 crc kubenswrapper[4947]: E1129 06:35:32.500360 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 06:36:36.500333927 +0000 UTC m=+147.544716048 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.597007 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.597040 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.597051 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.597066 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.597077 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:32Z","lastTransitionTime":"2025-11-29T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.699940 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.699998 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.700014 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.700032 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.700045 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:32Z","lastTransitionTime":"2025-11-29T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.802981 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.803024 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.803033 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.803050 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.803063 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:32Z","lastTransitionTime":"2025-11-29T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.906962 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.907036 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.907047 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.907063 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:32 crc kubenswrapper[4947]: I1129 06:35:32.907072 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:32Z","lastTransitionTime":"2025-11-29T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.009206 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.009263 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.009274 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.009290 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.009300 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:33Z","lastTransitionTime":"2025-11-29T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.112204 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.112267 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.112281 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.112303 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.112318 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:33Z","lastTransitionTime":"2025-11-29T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.177743 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:33 crc kubenswrapper[4947]: E1129 06:35:33.178015 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.215379 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.215435 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.215445 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.215460 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.215471 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:33Z","lastTransitionTime":"2025-11-29T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.318374 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.318406 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.318414 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.318429 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.318438 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:33Z","lastTransitionTime":"2025-11-29T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.492929 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.493012 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.493030 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.493061 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.493077 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:33Z","lastTransitionTime":"2025-11-29T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.596250 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.596301 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.596323 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.596340 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.596352 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:33Z","lastTransitionTime":"2025-11-29T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.689718 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.689766 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.689775 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.689794 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.689804 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:33Z","lastTransitionTime":"2025-11-29T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:33 crc kubenswrapper[4947]: E1129 06:35:33.709714 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.716509 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.716563 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.716580 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.716603 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.716619 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:33Z","lastTransitionTime":"2025-11-29T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:33 crc kubenswrapper[4947]: E1129 06:35:33.730576 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.734697 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.734733 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.734751 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.734774 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.734790 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:33Z","lastTransitionTime":"2025-11-29T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:33 crc kubenswrapper[4947]: E1129 06:35:33.749354 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.753915 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.753974 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.753993 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.754018 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.754035 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:33Z","lastTransitionTime":"2025-11-29T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:33 crc kubenswrapper[4947]: E1129 06:35:33.768828 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.773157 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.773243 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.773268 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.773299 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.773317 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:33Z","lastTransitionTime":"2025-11-29T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:33 crc kubenswrapper[4947]: E1129 06:35:33.790331 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:33Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:33 crc kubenswrapper[4947]: E1129 06:35:33.790553 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.792536 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.792588 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.792612 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.792638 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.792659 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:33Z","lastTransitionTime":"2025-11-29T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.895648 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.895685 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.895696 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.895714 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.895726 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:33Z","lastTransitionTime":"2025-11-29T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.999003 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.999395 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.999733 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:33 crc kubenswrapper[4947]: I1129 06:35:33.999919 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.000116 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:33Z","lastTransitionTime":"2025-11-29T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.103396 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.103801 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.104169 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.104514 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.104862 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:34Z","lastTransitionTime":"2025-11-29T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.177956 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.177956 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 29 06:35:34 crc kubenswrapper[4947]: E1129 06:35:34.178737 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.178033 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 29 06:35:34 crc kubenswrapper[4947]: E1129 06:35:34.178904 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 29 06:35:34 crc kubenswrapper[4947]: E1129 06:35:34.179385 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.208666 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.208772 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.208791 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.208817 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.208837 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:34Z","lastTransitionTime":"2025-11-29T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.311717 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.311776 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.311795 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.311817 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.311833 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:34Z","lastTransitionTime":"2025-11-29T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.414651 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.414695 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.414709 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.414727 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.414740 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:34Z","lastTransitionTime":"2025-11-29T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.518713 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.518791 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.518814 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.518843 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.518864 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:34Z","lastTransitionTime":"2025-11-29T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.622160 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.622253 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.622271 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.622296 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.622317 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:34Z","lastTransitionTime":"2025-11-29T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.725978 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.726048 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.726072 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.726103 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.726123 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:34Z","lastTransitionTime":"2025-11-29T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.828899 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.828951 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.828961 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.828976 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.828987 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:34Z","lastTransitionTime":"2025-11-29T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.931440 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.931484 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.931494 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.931509 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:34 crc kubenswrapper[4947]: I1129 06:35:34.931519 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:34Z","lastTransitionTime":"2025-11-29T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.034302 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.034408 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.034429 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.034451 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.034466 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:35Z","lastTransitionTime":"2025-11-29T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.137376 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.137413 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.137424 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.137441 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.137453 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:35Z","lastTransitionTime":"2025-11-29T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.178559 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 29 06:35:35 crc kubenswrapper[4947]: E1129 06:35:35.178768 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.239685 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.239754 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.239779 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.239806 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.239824 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:35Z","lastTransitionTime":"2025-11-29T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.342517 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.342563 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.342580 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.342601 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.342616 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:35Z","lastTransitionTime":"2025-11-29T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.445762 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.445804 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.445821 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.445843 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.445860 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:35Z","lastTransitionTime":"2025-11-29T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.549445 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.549485 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.549494 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.549508 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.549518 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:35Z","lastTransitionTime":"2025-11-29T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.652868 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.652947 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.652977 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.653006 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.653024 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:35Z","lastTransitionTime":"2025-11-29T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.754857 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.754903 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.754912 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.754930 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.754944 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:35Z","lastTransitionTime":"2025-11-29T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.857961 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.858059 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.858082 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.858111 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.858134 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:35Z","lastTransitionTime":"2025-11-29T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.960041 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.960078 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.960087 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.960100 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:35 crc kubenswrapper[4947]: I1129 06:35:35.960108 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:35Z","lastTransitionTime":"2025-11-29T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.062915 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.062967 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.062979 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.062997 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.063009 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:36Z","lastTransitionTime":"2025-11-29T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.166089 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.166180 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.166205 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.166295 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.166324 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:36Z","lastTransitionTime":"2025-11-29T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.178439 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.178561 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.178439 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5"
Nov 29 06:35:36 crc kubenswrapper[4947]: E1129 06:35:36.178636 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 29 06:35:36 crc kubenswrapper[4947]: E1129 06:35:36.178753 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 29 06:35:36 crc kubenswrapper[4947]: E1129 06:35:36.178822 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.270117 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.270207 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.270260 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.270293 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.270315 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:36Z","lastTransitionTime":"2025-11-29T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.373837 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.373890 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.373903 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.373921 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.373936 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:36Z","lastTransitionTime":"2025-11-29T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.477133 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.477308 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.477334 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.477371 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.477390 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:36Z","lastTransitionTime":"2025-11-29T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.580917 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.580975 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.580988 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.581008 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.581018 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:36Z","lastTransitionTime":"2025-11-29T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.683310 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.683358 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.683370 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.683388 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.683398 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:36Z","lastTransitionTime":"2025-11-29T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.786174 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.786236 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.786250 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.786264 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.786273 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:36Z","lastTransitionTime":"2025-11-29T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.888679 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.888727 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.888740 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.888758 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.888771 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:36Z","lastTransitionTime":"2025-11-29T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.991419 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.991482 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.991508 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.991537 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:36 crc kubenswrapper[4947]: I1129 06:35:36.991559 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:36Z","lastTransitionTime":"2025-11-29T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.094873 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.095136 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.095153 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.095180 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.095200 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:37Z","lastTransitionTime":"2025-11-29T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.178718 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:37 crc kubenswrapper[4947]: E1129 06:35:37.178929 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.197973 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.198029 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.198046 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.198070 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.198086 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:37Z","lastTransitionTime":"2025-11-29T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.301776 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.301859 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.301874 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.301897 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.301911 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:37Z","lastTransitionTime":"2025-11-29T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.405015 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.405056 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.405066 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.405082 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.405092 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:37Z","lastTransitionTime":"2025-11-29T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.508392 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.508463 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.508480 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.508508 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.508525 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:37Z","lastTransitionTime":"2025-11-29T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.611997 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.612078 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.612104 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.612138 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.612161 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:37Z","lastTransitionTime":"2025-11-29T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.715493 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.715538 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.715549 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.715565 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.715578 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:37Z","lastTransitionTime":"2025-11-29T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.818770 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.818822 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.818832 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.818856 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.818870 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:37Z","lastTransitionTime":"2025-11-29T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.922969 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.923016 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.923029 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.923047 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:37 crc kubenswrapper[4947]: I1129 06:35:37.923062 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:37Z","lastTransitionTime":"2025-11-29T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.026940 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.027009 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.027033 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.027064 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.027088 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:38Z","lastTransitionTime":"2025-11-29T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.131020 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.131085 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.131102 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.131126 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.131146 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:38Z","lastTransitionTime":"2025-11-29T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.178254 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.178313 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.178352 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:38 crc kubenswrapper[4947]: E1129 06:35:38.178446 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:38 crc kubenswrapper[4947]: E1129 06:35:38.178611 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:38 crc kubenswrapper[4947]: E1129 06:35:38.179106 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.179645 4947 scope.go:117] "RemoveContainer" containerID="f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920" Nov 29 06:35:38 crc kubenswrapper[4947]: E1129 06:35:38.179931 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-z4rxq_openshift-ovn-kubernetes(dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.197752 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b
19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.216903 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c68
12845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.235486 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:38 crc kubenswrapper[4947]: 
I1129 06:35:38.238272 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.238549 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.238856 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.239131 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.239351 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:38Z","lastTransitionTime":"2025-11-29T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.249804 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.266881 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a287baf-0c87-4698-9553-6f94927fbf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69ea28
bcbeb8379671147cd41f131b8b37b41a285319b082c43381a56cdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754f770b2635be1fc785e0cc958e0c885dd7516ba54de760493ec7778d738708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b25cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.283549 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db81144-11ad-4bb9-8158-dd661afb8844\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f62b86b8b7ede5c01c1026af41f584b1e7a171ff14ef1e3769ddf8e73121296f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12
962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ef6bea418d1acfb4cfbf3310e7898127bbd731f3eb432daf74d7eeecc4c796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5ef6bea418d1acfb4cfbf3310e7898127bbd731f3eb432daf74d7eeecc4c796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:38 
crc kubenswrapper[4947]: I1129 06:35:38.301560 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.321189 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.336745 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.342768 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.342987 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.343167 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.343416 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.343583 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:38Z","lastTransitionTime":"2025-11-29T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.354994 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ddd0da1118c2e86da1aea51f8248927f80bc1d21790723952b4d59f294cd76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:35:16Z\\\",\\\"message\\\":\\\"2025-11-29T06:34:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c66371c3-af0c-4d62-8105-c20fdc1c09f0\\\\n2025-11-29T06:34:31+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c66371c3-af0c-4d62-8105-c20fdc1c09f0 to /host/opt/cni/bin/\\\\n2025-11-29T06:34:31Z [verbose] multus-daemon started\\\\n2025-11-29T06:34:31Z [verbose] Readiness Indicator file check\\\\n2025-11-29T06:35:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.392741 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.414774 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"682a3ca0-7f80-4aa3-8627-44c5f9d6c661\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0decb70631f9a60b5a44909b2cd152c099aa6955393b715617a93d2639a8f211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7347898f9e11318a33ea5f24ef489a4e58da64e0631ac46aa91f30f5691ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836bfd5239874f47639673b177b0d441dff3d84e255c7c6d1983c9e0db5134fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.436012 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.447327 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.447391 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.447411 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.447443 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.447483 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:38Z","lastTransitionTime":"2025-11-29T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.453503 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fbj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:38 crc 
kubenswrapper[4947]: I1129 06:35:38.478824 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.502395 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.522823 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.550075 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.550201 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.550255 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.550287 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.550308 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:38Z","lastTransitionTime":"2025-11-29T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.552139 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it 
has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z]\\\\nI1129 06:35:23.026027 6879 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1129 06:35:23.026155 6879 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1129 06:35:23.026193 6879 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1129 06:35:23.026005 6879 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-5zgvc in node crc\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:35:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z4rxq_openshift-ovn-kubernetes(dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf
2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.574058 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1
fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:38Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.652608 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.652653 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.652662 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.652677 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.652686 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:38Z","lastTransitionTime":"2025-11-29T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.756343 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.756419 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.756443 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.756473 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.756498 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:38Z","lastTransitionTime":"2025-11-29T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.862188 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.862260 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.862271 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.862287 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.862296 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:38Z","lastTransitionTime":"2025-11-29T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.965320 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.965380 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.965391 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.965410 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:38 crc kubenswrapper[4947]: I1129 06:35:38.965421 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:38Z","lastTransitionTime":"2025-11-29T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.068000 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.068074 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.068091 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.068113 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.068128 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:39Z","lastTransitionTime":"2025-11-29T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.170949 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.170988 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.170997 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.171013 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.171024 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:39Z","lastTransitionTime":"2025-11-29T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.178587 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:39 crc kubenswrapper[4947]: E1129 06:35:39.178742 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.204216 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-d
ir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshi
ft-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.222862 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.243783 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.273620 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.273693 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.273707 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.273727 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.273765 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:39Z","lastTransitionTime":"2025-11-29T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.275514 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it 
has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z]\\\\nI1129 06:35:23.026027 6879 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1129 06:35:23.026155 6879 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1129 06:35:23.026193 6879 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1129 06:35:23.026005 6879 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-5zgvc in node crc\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:35:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z4rxq_openshift-ovn-kubernetes(dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf
2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.297256 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a287baf-0c87-4698-9553-6f94927fbf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69ea28bcbeb8379671147cd41f131b8b37b41a285319b082c43381a56cdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754f770b2635be1fc785e0cc958e0c885dd75
16ba54de760493ec7778d738708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b25cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.308721 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db81144-11ad-4bb9-8158-dd661afb8844\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f62b86b8b7ede5c01c1026af41f584b1e7a171ff14ef1e3769ddf8e73121296f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ef6bea418d1acfb4cfbf3310e7898127bbd731f3eb432daf74d7eeecc4c796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5ef6bea418d1acfb4cfbf3310e7898127bbd731f3eb432daf74d7eeecc4c796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.321302 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T06:35:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.333447 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.342391 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.350148 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34
:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.367595 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.376836 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.376918 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.376935 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.376985 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.377004 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:39Z","lastTransitionTime":"2025-11-29T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.378903 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.388566 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.401024 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.414857 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ddd0da1118c2e86da1aea51f8248927f80bc1d21790723952b4d59f294cd76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:35:16Z\\\",\\\"message\\\":\\\"2025-11-29T06:34:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c66371c3-af0c-4d62-8105-c20fdc1c09f0\\\\n2025-11-29T06:34:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c66371c3-af0c-4d62-8105-c20fdc1c09f0 to /host/opt/cni/bin/\\\\n2025-11-29T06:34:31Z [verbose] multus-daemon started\\\\n2025-11-29T06:34:31Z [verbose] Readiness Indicator file check\\\\n2025-11-29T06:35:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.431863 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.446978 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"682a3ca0-7f80-4aa3-8627-44c5f9d6c661\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0decb70631f9a60b5a44909b2cd152c099aa6955393b715617a93d2639a8f211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7347898f9e11318a33ea5f24ef489a4e58da64e0631ac46aa91f30f5691ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836bfd5239874f47639673b177b0d441dff3d84e255c7c6d1983c9e0db5134fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.465291 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.479961 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.480022 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.480036 4947 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.480059 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.480073 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:39Z","lastTransitionTime":"2025-11-29T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.483158 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fbj5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:39Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.583265 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.583305 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.583318 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.583334 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.583344 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:39Z","lastTransitionTime":"2025-11-29T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.685504 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.685550 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.685560 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.685581 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.685592 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:39Z","lastTransitionTime":"2025-11-29T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.788101 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.788435 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.788451 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.788469 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.788482 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:39Z","lastTransitionTime":"2025-11-29T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.891012 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.891081 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.891105 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.891134 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.891156 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:39Z","lastTransitionTime":"2025-11-29T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.993329 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.993366 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.993376 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.993392 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:39 crc kubenswrapper[4947]: I1129 06:35:39.993402 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:39Z","lastTransitionTime":"2025-11-29T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.096199 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.096248 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.096258 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.096274 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.096283 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:40Z","lastTransitionTime":"2025-11-29T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.178720 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:40 crc kubenswrapper[4947]: E1129 06:35:40.179086 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.179384 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.179542 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:40 crc kubenswrapper[4947]: E1129 06:35:40.179552 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:40 crc kubenswrapper[4947]: E1129 06:35:40.179756 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.199821 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.199885 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.199904 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.199928 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.199945 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:40Z","lastTransitionTime":"2025-11-29T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.302200 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.302272 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.302288 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.302311 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.302326 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:40Z","lastTransitionTime":"2025-11-29T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.405312 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.405364 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.405377 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.405394 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.405407 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:40Z","lastTransitionTime":"2025-11-29T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.509727 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.509782 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.509803 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.509819 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.509832 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:40Z","lastTransitionTime":"2025-11-29T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.611919 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.611957 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.611966 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.611978 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.611987 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:40Z","lastTransitionTime":"2025-11-29T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.714242 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.714277 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.714287 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.714303 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.714315 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:40Z","lastTransitionTime":"2025-11-29T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.817039 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.817070 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.817079 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.817094 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.817103 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:40Z","lastTransitionTime":"2025-11-29T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.919430 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.919499 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.919523 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.919553 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:40 crc kubenswrapper[4947]: I1129 06:35:40.919577 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:40Z","lastTransitionTime":"2025-11-29T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.022790 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.022855 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.022878 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.022908 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.022929 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:41Z","lastTransitionTime":"2025-11-29T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.125070 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.125131 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.125153 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.125181 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.125202 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:41Z","lastTransitionTime":"2025-11-29T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.177932 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:41 crc kubenswrapper[4947]: E1129 06:35:41.178175 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.227754 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.227814 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.227830 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.227855 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.227872 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:41Z","lastTransitionTime":"2025-11-29T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.330618 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.330649 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.330657 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.330671 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.330680 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:41Z","lastTransitionTime":"2025-11-29T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.434087 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.434152 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.434189 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.434252 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.434276 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:41Z","lastTransitionTime":"2025-11-29T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.537300 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.537377 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.537401 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.537432 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.537457 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:41Z","lastTransitionTime":"2025-11-29T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.639178 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.639287 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.639304 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.639321 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.639334 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:41Z","lastTransitionTime":"2025-11-29T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.741385 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.741427 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.741436 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.741450 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.741459 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:41Z","lastTransitionTime":"2025-11-29T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.844238 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.844275 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.844287 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.844302 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.844312 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:41Z","lastTransitionTime":"2025-11-29T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.948424 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.948471 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.948483 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.948501 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:41 crc kubenswrapper[4947]: I1129 06:35:41.948514 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:41Z","lastTransitionTime":"2025-11-29T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.051603 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.051662 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.051682 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.051708 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.051727 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:42Z","lastTransitionTime":"2025-11-29T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.155024 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.155093 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.155117 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.155148 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.155169 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:42Z","lastTransitionTime":"2025-11-29T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.178353 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.178417 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.178445 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:42 crc kubenswrapper[4947]: E1129 06:35:42.178518 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:42 crc kubenswrapper[4947]: E1129 06:35:42.178661 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:42 crc kubenswrapper[4947]: E1129 06:35:42.178813 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.258596 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.258724 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.258754 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.258781 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.258802 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:42Z","lastTransitionTime":"2025-11-29T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.361591 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.361655 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.361680 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.361711 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.361739 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:42Z","lastTransitionTime":"2025-11-29T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.464506 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.464545 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.464556 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.464571 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.464584 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:42Z","lastTransitionTime":"2025-11-29T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.567490 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.567553 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.567565 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.567584 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.567597 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:42Z","lastTransitionTime":"2025-11-29T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.670957 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.671004 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.671016 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.671032 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.671041 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:42Z","lastTransitionTime":"2025-11-29T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.773182 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.773280 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.773290 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.773317 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.773329 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:42Z","lastTransitionTime":"2025-11-29T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.876649 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.876711 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.876731 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.876752 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.876764 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:42Z","lastTransitionTime":"2025-11-29T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.979643 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.979736 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.979751 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.979796 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:42 crc kubenswrapper[4947]: I1129 06:35:42.979813 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:42Z","lastTransitionTime":"2025-11-29T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.082205 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.082274 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.082282 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.082296 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.082305 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:43Z","lastTransitionTime":"2025-11-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.178964 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:43 crc kubenswrapper[4947]: E1129 06:35:43.179336 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.185089 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.185136 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.185151 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.185173 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.185189 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:43Z","lastTransitionTime":"2025-11-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.287793 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.287895 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.287916 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.287942 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.287960 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:43Z","lastTransitionTime":"2025-11-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.391579 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.391635 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.391649 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.391673 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.391690 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:43Z","lastTransitionTime":"2025-11-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.494589 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.494654 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.494678 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.494708 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.494733 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:43Z","lastTransitionTime":"2025-11-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.597672 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.597743 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.597762 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.597789 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.597806 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:43Z","lastTransitionTime":"2025-11-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.700404 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.700489 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.700515 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.700550 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.700573 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:43Z","lastTransitionTime":"2025-11-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.802930 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.802972 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.802982 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.802997 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.803008 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:43Z","lastTransitionTime":"2025-11-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.861255 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.861314 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.861327 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.861356 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.861376 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:43Z","lastTransitionTime":"2025-11-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:43 crc kubenswrapper[4947]: E1129 06:35:43.879915 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:43Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.884738 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.884764 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.884773 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.884791 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.884801 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:43Z","lastTransitionTime":"2025-11-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:43 crc kubenswrapper[4947]: E1129 06:35:43.901861 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:43Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.907029 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.907093 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.907113 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.907140 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.907159 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:43Z","lastTransitionTime":"2025-11-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:43 crc kubenswrapper[4947]: E1129 06:35:43.925458 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:43Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.930643 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.930683 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.930692 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.930707 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.930721 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:43Z","lastTransitionTime":"2025-11-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:43 crc kubenswrapper[4947]: E1129 06:35:43.944322 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:43Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.949657 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.949697 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.949707 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.949722 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.949733 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:43Z","lastTransitionTime":"2025-11-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:43 crc kubenswrapper[4947]: E1129 06:35:43.971312 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ead809c-947a-4269-a0f3-b817113e9662\\\",\\\"systemUUID\\\":\\\"f0748469-4a41-446c-a5c3-776c2ca32148\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:43Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:43 crc kubenswrapper[4947]: E1129 06:35:43.971431 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.973757 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.973777 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.973785 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.973799 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:43 crc kubenswrapper[4947]: I1129 06:35:43.973809 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:43Z","lastTransitionTime":"2025-11-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.076494 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.076537 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.076545 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.076561 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.076571 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:44Z","lastTransitionTime":"2025-11-29T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.177844 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.177882 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:44 crc kubenswrapper[4947]: E1129 06:35:44.177973 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.177840 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:44 crc kubenswrapper[4947]: E1129 06:35:44.178082 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:44 crc kubenswrapper[4947]: E1129 06:35:44.178271 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.179615 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.179639 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.179649 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.179664 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.179674 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:44Z","lastTransitionTime":"2025-11-29T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.282427 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.282466 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.282480 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.282498 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.282510 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:44Z","lastTransitionTime":"2025-11-29T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.385940 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.385991 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.386011 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.386035 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.386050 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:44Z","lastTransitionTime":"2025-11-29T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.489095 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.489736 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.489842 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.489944 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.490038 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:44Z","lastTransitionTime":"2025-11-29T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.593044 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.593503 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.593653 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.593796 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.593953 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:44Z","lastTransitionTime":"2025-11-29T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.696564 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.696635 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.696662 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.696692 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.696714 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:44Z","lastTransitionTime":"2025-11-29T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.799451 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.799522 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.799540 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.799565 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.799581 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:44Z","lastTransitionTime":"2025-11-29T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.903073 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.903202 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.903254 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.903306 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:44 crc kubenswrapper[4947]: I1129 06:35:44.903331 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:44Z","lastTransitionTime":"2025-11-29T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.005956 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.006114 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.006134 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.006157 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.006172 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:45Z","lastTransitionTime":"2025-11-29T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.108738 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.108883 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.108907 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.108931 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.108947 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:45Z","lastTransitionTime":"2025-11-29T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.178450 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:45 crc kubenswrapper[4947]: E1129 06:35:45.178621 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.211541 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.211597 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.211620 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.211645 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.211663 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:45Z","lastTransitionTime":"2025-11-29T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.315353 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.315410 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.315427 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.315451 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.315467 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:45Z","lastTransitionTime":"2025-11-29T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.417828 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.417866 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.417879 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.417894 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.417904 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:45Z","lastTransitionTime":"2025-11-29T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.520655 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.520716 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.520733 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.520757 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.520773 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:45Z","lastTransitionTime":"2025-11-29T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.623608 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.623688 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.623712 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.623746 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.623771 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:45Z","lastTransitionTime":"2025-11-29T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.726756 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.726820 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.726842 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.726869 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.726887 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:45Z","lastTransitionTime":"2025-11-29T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.830123 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.830184 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.830201 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.830257 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.830281 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:45Z","lastTransitionTime":"2025-11-29T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.934551 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.934619 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.934637 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.934666 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:45 crc kubenswrapper[4947]: I1129 06:35:45.934706 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:45Z","lastTransitionTime":"2025-11-29T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.038101 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.038470 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.038549 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.038575 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.038666 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:46Z","lastTransitionTime":"2025-11-29T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.141852 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.141919 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.141954 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.141985 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.142008 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:46Z","lastTransitionTime":"2025-11-29T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.177799 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.177877 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:46 crc kubenswrapper[4947]: E1129 06:35:46.177960 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.177801 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:46 crc kubenswrapper[4947]: E1129 06:35:46.178161 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:46 crc kubenswrapper[4947]: E1129 06:35:46.178287 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.245288 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.245350 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.245374 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.245404 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.245425 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:46Z","lastTransitionTime":"2025-11-29T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.348151 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.348214 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.348280 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.348306 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.348322 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:46Z","lastTransitionTime":"2025-11-29T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.454878 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.454925 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.454937 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.454962 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.454976 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:46Z","lastTransitionTime":"2025-11-29T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.557890 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.557936 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.557951 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.557969 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.557980 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:46Z","lastTransitionTime":"2025-11-29T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.660399 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.660430 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.660438 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.660452 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.660460 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:46Z","lastTransitionTime":"2025-11-29T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.763275 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.763453 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.763475 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.763504 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.763522 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:46Z","lastTransitionTime":"2025-11-29T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.866208 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.866277 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.866296 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.866313 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.866324 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:46Z","lastTransitionTime":"2025-11-29T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.969718 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.969770 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.969787 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.969810 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:46 crc kubenswrapper[4947]: I1129 06:35:46.969826 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:46Z","lastTransitionTime":"2025-11-29T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.072840 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.072880 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.072892 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.072908 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.072919 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:47Z","lastTransitionTime":"2025-11-29T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.176621 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.176706 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.176738 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.176769 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.176790 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:47Z","lastTransitionTime":"2025-11-29T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.177809 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:47 crc kubenswrapper[4947]: E1129 06:35:47.178072 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.280621 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.280675 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.280697 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.280737 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.280758 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:47Z","lastTransitionTime":"2025-11-29T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.386642 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.386682 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.386690 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.386705 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.386714 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:47Z","lastTransitionTime":"2025-11-29T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.489676 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.489748 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.489772 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.489801 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.489822 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:47Z","lastTransitionTime":"2025-11-29T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.592060 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.592100 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.592111 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.592127 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.592137 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:47Z","lastTransitionTime":"2025-11-29T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.694391 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.694435 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.694447 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.694464 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.694477 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:47Z","lastTransitionTime":"2025-11-29T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.796596 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.796641 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.796653 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.796670 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.796680 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:47Z","lastTransitionTime":"2025-11-29T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.899148 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.899257 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.899288 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.899332 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:47 crc kubenswrapper[4947]: I1129 06:35:47.899355 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:47Z","lastTransitionTime":"2025-11-29T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.001607 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.001676 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.001698 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.001726 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.001749 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:48Z","lastTransitionTime":"2025-11-29T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.080728 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs\") pod \"network-metrics-daemon-2fbj5\" (UID: \"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\") " pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:48 crc kubenswrapper[4947]: E1129 06:35:48.080973 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 06:35:48 crc kubenswrapper[4947]: E1129 06:35:48.081094 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs podName:53a3bcac-8ad0-47ce-abee-ee56fd152ea8 nodeName:}" failed. No retries permitted until 2025-11-29 06:36:52.081067163 +0000 UTC m=+163.125449284 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs") pod "network-metrics-daemon-2fbj5" (UID: "53a3bcac-8ad0-47ce-abee-ee56fd152ea8") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.105107 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.105172 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.105188 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.105213 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.105316 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:48Z","lastTransitionTime":"2025-11-29T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.178629 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.178723 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:48 crc kubenswrapper[4947]: E1129 06:35:48.178797 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.178919 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:48 crc kubenswrapper[4947]: E1129 06:35:48.179163 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:48 crc kubenswrapper[4947]: E1129 06:35:48.179412 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.208580 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.208653 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.208673 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.208699 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.208716 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:48Z","lastTransitionTime":"2025-11-29T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.311656 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.311723 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.311742 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.311768 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.311786 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:48Z","lastTransitionTime":"2025-11-29T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.414465 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.414542 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.414559 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.414583 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.414601 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:48Z","lastTransitionTime":"2025-11-29T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.517115 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.517179 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.517202 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.517305 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.517336 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:48Z","lastTransitionTime":"2025-11-29T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.620321 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.620389 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.620413 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.620445 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.620465 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:48Z","lastTransitionTime":"2025-11-29T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.722916 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.722961 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.722972 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.722988 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.722997 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:48Z","lastTransitionTime":"2025-11-29T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.825461 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.825508 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.825539 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.825555 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.825564 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:48Z","lastTransitionTime":"2025-11-29T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.929051 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.929107 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.929125 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.929149 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:48 crc kubenswrapper[4947]: I1129 06:35:48.929167 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:48Z","lastTransitionTime":"2025-11-29T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.032276 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.032350 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.032373 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.032442 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.032470 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:49Z","lastTransitionTime":"2025-11-29T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.135053 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.135089 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.135098 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.135115 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.135124 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:49Z","lastTransitionTime":"2025-11-29T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.178662 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:49 crc kubenswrapper[4947]: E1129 06:35:49.178781 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.198064 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:35:23Z\\\",\\\"message\\\":\\\"go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it 
has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:22Z is after 2025-08-24T17:21:41Z]\\\\nI1129 06:35:23.026027 6879 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1129 06:35:23.026155 6879 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1129 06:35:23.026193 6879 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1129 06:35:23.026005 6879 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-5zgvc in node crc\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:35:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z4rxq_openshift-ovn-kubernetes(dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965374cc19d92fa4bf
2de4392f05033083478852dfaf4851c66d2bbefd190be1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grbf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z4rxq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.213330 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabaff26-a896-4929-8b32-6e32efe02ffc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-29T06:34:28Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1129 06:34:22.756831 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1129 06:34:22.759210 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4196230451/tls.crt::/tmp/serving-cert-4196230451/tls.key\\\\\\\"\\\\nI1129 06:34:28.364401 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1129 06:34:28.369411 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1129 06:34:28.369441 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1129 06:34:28.369468 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1129 06:34:28.369474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1129 06:34:28.374098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1129 06:34:28.374134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1129 06:34:28.374148 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1129 06:34:28.374153 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1129 06:34:28.374157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1129 06:34:28.374161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1129 06:34:28.374166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1129 06:34:28.376189 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25f700a6bef6a4d36e501040a179d4a1
fc314d356577578d77db9ae868e980c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.227929 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de17371c2f08b16e96df9f76ce3eaf0981cc8590aca0784ea95598b7842812eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.237078 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.237109 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.237118 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.237131 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.237140 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:49Z","lastTransitionTime":"2025-11-29T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.241813 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.252797 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f4d791f-bb61-4aaa-a09c-3007b59645a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247b6579653599785a44fc9e8e0d47f93802f01f2002465a9ba3825d487a324b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b3
5462b93b45c34cf98e89c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5zgvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.263037 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ttw9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8440d6ae-a357-461e-a91f-a48625b4a9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c526243829deb889e7afc49647d8bf9960f886b6abc9aa7cba8a69c8d5b3ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mxr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ttw9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.274665 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a287baf-0c87-4698-9553-6f94927fbf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69ea28bcbeb8379671147cd41f131b8b37b41a285319b082c43381a56cdc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754f770b2635be1fc785e0cc958e0c885dd7516ba54de760493ec7778d738708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbmt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b25cq\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.286403 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db81144-11ad-4bb9-8158-dd661afb8844\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f62b86b8b7ede5c01c1026af41f584b1e7a171ff14ef1e3769ddf8e73121296f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ef6bea418d1acfb4cfbf3310e7898127bbd731f3eb432daf74d7eeecc4c796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5ef6bea418d1acfb4cfbf3310e7898127bbd731f3eb432daf74d7eeecc4c796\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.299782 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be169c8b603ce62c3c60b6c67eea559c20cbcfc7ba2d0965bcf6acfb00e99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T06:35:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.313812 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6063fc55-4365-4f22-a005-bfac3812fdce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c5fd1426004597dc139d078e4f9b5bb7fec8ab12162ca6b052f5eb43025b6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f48e35b26b0ad516cfcbe39d076b7decda3521b80ca21863ce9447718e4a86a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80aa902ebe6cfb129dd5231e40ac5a948f2209db79dbbc158fbbfaa150d83477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406774802f773c0076cde13a2cb96ba5444233f29cb2c18c6712d4df7ff171aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c6812845b33c70cf5f42779b09e2bb945ee14f03ec3d1be46027bdc0457353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe3a8d1d8d729e919ab73a034943a441b5bc44409af32b5671d73b80e2bd6ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a49d19e2f799deb4cf6b8154a2f3c0485c06ad44d0b3e0bac5b2b85f16228f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5bjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zb4cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.323047 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sxdk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f180a1-2fb0-4b96-85ed-1116677a7c99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b80d32b0abeeab39bc9fd8c24510ec865e957006ecac31da11f86b6e0fb983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swgjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sxdk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.334141 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlg45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cbb3532-a15b-4cca-bde1-aa1ae20698f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ddd0da1118c2e86da1aea51f8248927f80bc1d21790723952b4d59f294cd76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T06:35:16Z\\\",\\\"message\\\":\\\"2025-11-29T06:34:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c66371c3-af0c-4d62-8105-c20fdc1c09f0\\\\n2025-11-29T06:34:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c66371c3-af0c-4d62-8105-c20fdc1c09f0 to /host/opt/cni/bin/\\\\n2025-11-29T06:34:31Z [verbose] multus-daemon started\\\\n2025-11-29T06:34:31Z [verbose] 
Readiness Indicator file check\\\\n2025-11-29T06:35:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rndhc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlg45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.339781 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.339826 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.339842 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.339864 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.339881 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:49Z","lastTransitionTime":"2025-11-29T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.352483 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268b8d19-01a0-4696-8b9e-0efed4129d56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06bdbb1b45cdc328f5c4345797e222ac33d9f1ea052011c183a229c02134cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa666baf286a9b54dbafbe35f35c3be377481392a4e3811f72cf189d1adbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ef81ddc4f773e06ee9c296ecc2e67ce76bf795fbd8357236cad6c0462d7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360e1f2c76ee825b275816919d9ab1473c6db5da5ab22b43134738150306f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4712b1e44979f37b926f325313325a788ecb9e01b30583bf8210ab6c53d71d57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5586d2e5e51b11d1c0b30ad06e66001772048012482681af4e923d196d5099df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fb325dedce82b64b129ace56fc908d0538d1cbf42ba3d989ae14b499679bfde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87781f0dd4b78f68ab4a614615675dbd157b7c9a1e79db891dc470ef582e8af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-29T06:34:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.385335 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3934144663bb1471d168783c160c42654884d04d2510cbe1e5f2f6e2cfb94f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.403808 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.414798 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxj27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2fbj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:49 crc 
kubenswrapper[4947]: I1129 06:35:49.427015 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d5af32-2f1c-4aa8-a9b3-c436a5b3cb32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8225fda5f45611099716e891065acf0ffa2db6d2c9b79192b5955c1bac77daf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfae5cdad70e7948b6383df2f934a477b237c38b94cf3d3eda22d11fe02826e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20235c8af3fdd17aa70bd0744d8faf37aa7781f63ef194f6e467721a47e388d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.439203 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"682a3ca0-7f80-4aa3-8627-44c5f9d6c661\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0decb70631f9a60b5a44909b2cd152c099aa6955393b715617a93d2639a8f211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7347898f9e11318a33ea5f24ef489a4e58da64e0631ac46aa91f30f5691ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836bfd5239874f47639673b177b0d441dff3d84e255c7c6d1983c9e0db5134fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T06:34:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8d0e4509596cc7d5e28048c72689ccfc8c249cf06f856142be2b48103608b05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T06:34:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T06:34:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T06:34:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.441765 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.441881 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.441956 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.442048 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.442146 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:49Z","lastTransitionTime":"2025-11-29T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.449410 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T06:34:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T06:35:49Z is after 2025-08-24T17:21:41Z" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.545745 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.545810 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.545830 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.545851 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.545868 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:49Z","lastTransitionTime":"2025-11-29T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.648891 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.648952 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.648974 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.649002 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.649024 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:49Z","lastTransitionTime":"2025-11-29T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.752129 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.752182 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.752196 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.752213 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.752265 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:49Z","lastTransitionTime":"2025-11-29T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.854787 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.854859 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.854882 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.854911 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.854932 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:49Z","lastTransitionTime":"2025-11-29T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.957414 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.957490 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.957505 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.957522 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:49 crc kubenswrapper[4947]: I1129 06:35:49.957538 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:49Z","lastTransitionTime":"2025-11-29T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.060423 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.060455 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.060465 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.060481 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.060492 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:50Z","lastTransitionTime":"2025-11-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.162725 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.162775 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.162785 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.162802 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.162813 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:50Z","lastTransitionTime":"2025-11-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.178758 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.178829 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.178859 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:50 crc kubenswrapper[4947]: E1129 06:35:50.178922 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:50 crc kubenswrapper[4947]: E1129 06:35:50.179169 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:50 crc kubenswrapper[4947]: E1129 06:35:50.179323 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.265642 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.265683 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.265694 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.265710 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.265720 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:50Z","lastTransitionTime":"2025-11-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.368021 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.368095 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.368113 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.368137 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.368157 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:50Z","lastTransitionTime":"2025-11-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.470379 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.470442 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.470459 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.470482 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.470502 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:50Z","lastTransitionTime":"2025-11-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.573057 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.573139 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.573165 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.573197 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.573256 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:50Z","lastTransitionTime":"2025-11-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.677547 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.677606 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.677640 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.677679 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.677702 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:50Z","lastTransitionTime":"2025-11-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.780738 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.780803 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.780819 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.780843 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.780861 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:50Z","lastTransitionTime":"2025-11-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.884190 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.884301 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.884319 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.884343 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.884360 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:50Z","lastTransitionTime":"2025-11-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.987253 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.987289 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.987298 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.987314 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:50 crc kubenswrapper[4947]: I1129 06:35:50.987323 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:50Z","lastTransitionTime":"2025-11-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.089560 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.089638 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.089658 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.089688 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.089706 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:51Z","lastTransitionTime":"2025-11-29T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.179113 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:51 crc kubenswrapper[4947]: E1129 06:35:51.183966 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.193000 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.193111 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.193130 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.193156 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.193174 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:51Z","lastTransitionTime":"2025-11-29T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.296963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.297020 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.297037 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.297060 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.297080 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:51Z","lastTransitionTime":"2025-11-29T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.399844 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.399881 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.399898 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.399914 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.399924 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:51Z","lastTransitionTime":"2025-11-29T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.501810 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.501856 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.501866 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.501881 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.501891 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:51Z","lastTransitionTime":"2025-11-29T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.604614 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.604673 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.604683 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.604699 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.604710 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:51Z","lastTransitionTime":"2025-11-29T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.708007 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.708074 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.708091 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.708116 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.708133 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:51Z","lastTransitionTime":"2025-11-29T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.811042 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.811114 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.811137 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.811169 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.811189 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:51Z","lastTransitionTime":"2025-11-29T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.914787 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.914860 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.914882 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.914908 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:51 crc kubenswrapper[4947]: I1129 06:35:51.914925 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:51Z","lastTransitionTime":"2025-11-29T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.017820 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.017888 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.017905 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.017927 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.017975 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:52Z","lastTransitionTime":"2025-11-29T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.121140 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.121179 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.121189 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.121204 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.121213 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:52Z","lastTransitionTime":"2025-11-29T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.178481 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.178835 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.178906 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:52 crc kubenswrapper[4947]: E1129 06:35:52.179280 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:52 crc kubenswrapper[4947]: E1129 06:35:52.179162 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:52 crc kubenswrapper[4947]: E1129 06:35:52.179433 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.223931 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.223997 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.224021 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.224055 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.224081 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:52Z","lastTransitionTime":"2025-11-29T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.326704 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.326745 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.326755 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.326773 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.326783 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:52Z","lastTransitionTime":"2025-11-29T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.428893 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.428947 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.428959 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.428982 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.429027 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:52Z","lastTransitionTime":"2025-11-29T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.531769 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.531810 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.531823 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.531840 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.531851 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:52Z","lastTransitionTime":"2025-11-29T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.634431 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.634469 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.634481 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.634499 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.634511 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:52Z","lastTransitionTime":"2025-11-29T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.737288 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.737357 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.737390 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.737441 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.737463 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:52Z","lastTransitionTime":"2025-11-29T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.839905 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.839952 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.839968 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.839987 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.839998 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:52Z","lastTransitionTime":"2025-11-29T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.943036 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.943088 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.943108 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.943136 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:52 crc kubenswrapper[4947]: I1129 06:35:52.943157 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:52Z","lastTransitionTime":"2025-11-29T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.045798 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.045848 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.045864 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.045885 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.045900 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:53Z","lastTransitionTime":"2025-11-29T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.148169 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.148293 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.148313 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.148337 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.148354 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:53Z","lastTransitionTime":"2025-11-29T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.178532 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 29 06:35:53 crc kubenswrapper[4947]: E1129 06:35:53.178757 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.180306 4947 scope.go:117] "RemoveContainer" containerID="f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920"
Nov 29 06:35:53 crc kubenswrapper[4947]: E1129 06:35:53.180967 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-z4rxq_openshift-ovn-kubernetes(dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.250614 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.250645 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.250654 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.250668 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.250676 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:53Z","lastTransitionTime":"2025-11-29T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.353194 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.353304 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.353318 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.353338 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.353350 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:53Z","lastTransitionTime":"2025-11-29T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.456480 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.456539 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.456555 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.456578 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.456598 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:53Z","lastTransitionTime":"2025-11-29T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.559612 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.559653 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.559663 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.559678 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.559687 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:53Z","lastTransitionTime":"2025-11-29T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.662548 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.662607 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.662624 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.662686 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.662712 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:53Z","lastTransitionTime":"2025-11-29T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.766326 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.766409 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.766432 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.766685 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.766737 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:53Z","lastTransitionTime":"2025-11-29T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.869831 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.869894 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.869913 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.869940 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.869958 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:53Z","lastTransitionTime":"2025-11-29T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.973903 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.973972 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.973983 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.974002 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:53 crc kubenswrapper[4947]: I1129 06:35:53.974013 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:53Z","lastTransitionTime":"2025-11-29T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.077019 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.077077 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.077090 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.077112 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.077125 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:54Z","lastTransitionTime":"2025-11-29T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.178593 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 29 06:35:54 crc kubenswrapper[4947]: E1129 06:35:54.178754 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.179498 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.179831 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5"
Nov 29 06:35:54 crc kubenswrapper[4947]: E1129 06:35:54.179978 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 29 06:35:54 crc kubenswrapper[4947]: E1129 06:35:54.180185 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.181312 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.181343 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.181355 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.181374 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.181387 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:54Z","lastTransitionTime":"2025-11-29T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.284114 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.284152 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.284169 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.284186 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.284198 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:54Z","lastTransitionTime":"2025-11-29T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.373070 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.373164 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.373189 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.373258 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.373288 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T06:35:54Z","lastTransitionTime":"2025-11-29T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.432081 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf84v"]
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.432632 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf84v"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.437129 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.437235 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.437212 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.439367 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.452301 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=85.452285989 podStartE2EDuration="1m25.452285989s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:35:54.451639054 +0000 UTC m=+105.496021145" watchObservedRunningTime="2025-11-29 06:35:54.452285989 +0000 UTC m=+105.496668070"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.514770 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=23.514749586 podStartE2EDuration="23.514749586s" podCreationTimestamp="2025-11-29 06:35:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:35:54.514681944 +0000 UTC m=+105.559064025" watchObservedRunningTime="2025-11-29 06:35:54.514749586 +0000 UTC m=+105.559131667"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.539289 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-zb4cv" podStartSLOduration=85.539270147 podStartE2EDuration="1m25.539270147s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:35:54.539182185 +0000 UTC m=+105.583564276" watchObservedRunningTime="2025-11-29 06:35:54.539270147 +0000 UTC m=+105.583652228"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.551377 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podStartSLOduration=85.551355239 podStartE2EDuration="1m25.551355239s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:35:54.55060014 +0000 UTC m=+105.594982232" watchObservedRunningTime="2025-11-29 06:35:54.551355239 +0000 UTC m=+105.595737320"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.559045 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdec35e3-d16d-4cbf-9671-f6951f194f02-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cf84v\" (UID: \"cdec35e3-d16d-4cbf-9671-f6951f194f02\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf84v"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.559107 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cdec35e3-d16d-4cbf-9671-f6951f194f02-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cf84v\" (UID: \"cdec35e3-d16d-4cbf-9671-f6951f194f02\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf84v"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.559140 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cdec35e3-d16d-4cbf-9671-f6951f194f02-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cf84v\" (UID: \"cdec35e3-d16d-4cbf-9671-f6951f194f02\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf84v"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.559251 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cdec35e3-d16d-4cbf-9671-f6951f194f02-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cf84v\" (UID: \"cdec35e3-d16d-4cbf-9671-f6951f194f02\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf84v"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.559300 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cdec35e3-d16d-4cbf-9671-f6951f194f02-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cf84v\" (UID: \"cdec35e3-d16d-4cbf-9671-f6951f194f02\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf84v"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.559802 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-ttw9v" podStartSLOduration=85.559776652 podStartE2EDuration="1m25.559776652s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:35:54.559738951 +0000 UTC m=+105.604121032" watchObservedRunningTime="2025-11-29 06:35:54.559776652 +0000 UTC m=+105.604158733"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.571017 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b25cq" podStartSLOduration=84.570999272 podStartE2EDuration="1m24.570999272s" podCreationTimestamp="2025-11-29 06:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:35:54.570579782 +0000 UTC m=+105.614961863" watchObservedRunningTime="2025-11-29 06:35:54.570999272 +0000 UTC m=+105.615381363"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.613517 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=86.613498768 podStartE2EDuration="1m26.613498768s" podCreationTimestamp="2025-11-29 06:34:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:35:54.598392533 +0000 UTC m=+105.642774654" watchObservedRunningTime="2025-11-29 06:35:54.613498768 +0000 UTC m=+105.657880869"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.637457 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-sxdk5" podStartSLOduration=86.637438845 podStartE2EDuration="1m26.637438845s" podCreationTimestamp="2025-11-29 06:34:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:35:54.636151304 +0000 UTC m=+105.680533455" watchObservedRunningTime="2025-11-29 06:35:54.637438845 +0000 UTC m=+105.681820926"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.654995 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-xlg45" podStartSLOduration=85.654970608 podStartE2EDuration="1m25.654970608s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:35:54.654532707 +0000 UTC m=+105.698914798" watchObservedRunningTime="2025-11-29 06:35:54.654970608 +0000 UTC m=+105.699352729"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.659936 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cdec35e3-d16d-4cbf-9671-f6951f194f02-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cf84v\" (UID: \"cdec35e3-d16d-4cbf-9671-f6951f194f02\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf84v"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.659985 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cdec35e3-d16d-4cbf-9671-f6951f194f02-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cf84v\" (UID: \"cdec35e3-d16d-4cbf-9671-f6951f194f02\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf84v"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.660028 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdec35e3-d16d-4cbf-9671-f6951f194f02-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cf84v\" (UID: \"cdec35e3-d16d-4cbf-9671-f6951f194f02\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf84v"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.660062 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cdec35e3-d16d-4cbf-9671-f6951f194f02-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cf84v\" (UID: \"cdec35e3-d16d-4cbf-9671-f6951f194f02\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf84v"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.660092 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cdec35e3-d16d-4cbf-9671-f6951f194f02-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cf84v\" (UID: \"cdec35e3-d16d-4cbf-9671-f6951f194f02\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf84v"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.660149 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cdec35e3-d16d-4cbf-9671-f6951f194f02-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cf84v\" (UID: \"cdec35e3-d16d-4cbf-9671-f6951f194f02\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf84v"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.660268 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cdec35e3-d16d-4cbf-9671-f6951f194f02-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cf84v\" (UID: \"cdec35e3-d16d-4cbf-9671-f6951f194f02\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf84v"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.661460 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cdec35e3-d16d-4cbf-9671-f6951f194f02-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cf84v\" (UID: \"cdec35e3-d16d-4cbf-9671-f6951f194f02\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf84v"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.668423 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdec35e3-d16d-4cbf-9671-f6951f194f02-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cf84v\" (UID: \"cdec35e3-d16d-4cbf-9671-f6951f194f02\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf84v"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.677688 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=86.677654185 podStartE2EDuration="1m26.677654185s" podCreationTimestamp="2025-11-29 06:34:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:35:54.671042285 +0000 UTC m=+105.715424406" watchObservedRunningTime="2025-11-29 06:35:54.677654185 +0000 UTC m=+105.722036316"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.700015 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cdec35e3-d16d-4cbf-9671-f6951f194f02-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cf84v\" (UID: \"cdec35e3-d16d-4cbf-9671-f6951f194f02\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf84v"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.705032 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=57.704981854 podStartE2EDuration="57.704981854s" podCreationTimestamp="2025-11-29 06:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:35:54.693380334 +0000 UTC m=+105.737762435" watchObservedRunningTime="2025-11-29 06:35:54.704981854 +0000 UTC m=+105.749363945"
Nov 29 06:35:54 crc kubenswrapper[4947]: I1129 06:35:54.764951 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf84v"
Nov 29 06:35:54 crc kubenswrapper[4947]: W1129 06:35:54.784024 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdec35e3_d16d_4cbf_9671_f6951f194f02.slice/crio-371322d625d7f179669eb5787f9a9d6294a8e8e4cec9f9b38280156076de65f4 WatchSource:0}: Error finding container 371322d625d7f179669eb5787f9a9d6294a8e8e4cec9f9b38280156076de65f4: Status 404 returned error can't find the container with id 371322d625d7f179669eb5787f9a9d6294a8e8e4cec9f9b38280156076de65f4
Nov 29 06:35:55 crc kubenswrapper[4947]: I1129 06:35:55.177992 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 29 06:35:55 crc kubenswrapper[4947]: E1129 06:35:55.178520 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:55 crc kubenswrapper[4947]: I1129 06:35:55.732999 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf84v" event={"ID":"cdec35e3-d16d-4cbf-9671-f6951f194f02","Type":"ContainerStarted","Data":"a38bf1122efd0e8b684379ffea88a413316965df4b82f56d448ba4d67166a2f9"} Nov 29 06:35:55 crc kubenswrapper[4947]: I1129 06:35:55.733088 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf84v" event={"ID":"cdec35e3-d16d-4cbf-9671-f6951f194f02","Type":"ContainerStarted","Data":"371322d625d7f179669eb5787f9a9d6294a8e8e4cec9f9b38280156076de65f4"} Nov 29 06:35:55 crc kubenswrapper[4947]: I1129 06:35:55.749665 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf84v" podStartSLOduration=86.749640169 podStartE2EDuration="1m26.749640169s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:35:55.748799199 +0000 UTC m=+106.793181330" watchObservedRunningTime="2025-11-29 06:35:55.749640169 +0000 UTC m=+106.794022310" Nov 29 06:35:56 crc kubenswrapper[4947]: I1129 06:35:56.178720 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:56 crc kubenswrapper[4947]: I1129 06:35:56.178669 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:56 crc kubenswrapper[4947]: E1129 06:35:56.178869 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:56 crc kubenswrapper[4947]: E1129 06:35:56.179055 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:56 crc kubenswrapper[4947]: I1129 06:35:56.179416 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:56 crc kubenswrapper[4947]: E1129 06:35:56.179611 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:57 crc kubenswrapper[4947]: I1129 06:35:57.178030 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:57 crc kubenswrapper[4947]: E1129 06:35:57.178327 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:35:58 crc kubenswrapper[4947]: I1129 06:35:58.178272 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:35:58 crc kubenswrapper[4947]: I1129 06:35:58.178342 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:35:58 crc kubenswrapper[4947]: I1129 06:35:58.178374 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:35:58 crc kubenswrapper[4947]: E1129 06:35:58.178428 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:35:58 crc kubenswrapper[4947]: E1129 06:35:58.178514 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:35:58 crc kubenswrapper[4947]: E1129 06:35:58.178574 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:35:59 crc kubenswrapper[4947]: I1129 06:35:59.178587 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:35:59 crc kubenswrapper[4947]: E1129 06:35:59.179896 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:36:00 crc kubenswrapper[4947]: I1129 06:36:00.178142 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:36:00 crc kubenswrapper[4947]: I1129 06:36:00.178264 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:36:00 crc kubenswrapper[4947]: E1129 06:36:00.178304 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:36:00 crc kubenswrapper[4947]: I1129 06:36:00.178352 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:36:00 crc kubenswrapper[4947]: E1129 06:36:00.178488 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:36:00 crc kubenswrapper[4947]: E1129 06:36:00.178621 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:36:01 crc kubenswrapper[4947]: I1129 06:36:01.178061 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:36:01 crc kubenswrapper[4947]: E1129 06:36:01.178286 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:36:02 crc kubenswrapper[4947]: I1129 06:36:02.177904 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:36:02 crc kubenswrapper[4947]: I1129 06:36:02.177927 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:36:02 crc kubenswrapper[4947]: E1129 06:36:02.178141 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:36:02 crc kubenswrapper[4947]: I1129 06:36:02.177927 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:36:02 crc kubenswrapper[4947]: E1129 06:36:02.178417 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:36:02 crc kubenswrapper[4947]: E1129 06:36:02.178475 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:36:03 crc kubenswrapper[4947]: I1129 06:36:03.178898 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:36:03 crc kubenswrapper[4947]: E1129 06:36:03.179188 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:36:03 crc kubenswrapper[4947]: I1129 06:36:03.765354 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xlg45_2cbb3532-a15b-4cca-bde1-aa1ae20698f1/kube-multus/1.log" Nov 29 06:36:03 crc kubenswrapper[4947]: I1129 06:36:03.766005 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xlg45_2cbb3532-a15b-4cca-bde1-aa1ae20698f1/kube-multus/0.log" Nov 29 06:36:03 crc kubenswrapper[4947]: I1129 06:36:03.766088 4947 generic.go:334] "Generic (PLEG): container finished" podID="2cbb3532-a15b-4cca-bde1-aa1ae20698f1" containerID="63ddd0da1118c2e86da1aea51f8248927f80bc1d21790723952b4d59f294cd76" exitCode=1 Nov 29 06:36:03 crc kubenswrapper[4947]: I1129 06:36:03.766134 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xlg45" event={"ID":"2cbb3532-a15b-4cca-bde1-aa1ae20698f1","Type":"ContainerDied","Data":"63ddd0da1118c2e86da1aea51f8248927f80bc1d21790723952b4d59f294cd76"} Nov 29 06:36:03 crc kubenswrapper[4947]: I1129 06:36:03.766183 4947 scope.go:117] "RemoveContainer" containerID="35c22792f1661e18f407880f3a743a03890141d9f5c6062e8ba6b06204b4e3da" Nov 29 06:36:03 crc kubenswrapper[4947]: I1129 06:36:03.766856 4947 scope.go:117] "RemoveContainer" containerID="63ddd0da1118c2e86da1aea51f8248927f80bc1d21790723952b4d59f294cd76" Nov 29 06:36:03 crc kubenswrapper[4947]: E1129 06:36:03.767091 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-xlg45_openshift-multus(2cbb3532-a15b-4cca-bde1-aa1ae20698f1)\"" pod="openshift-multus/multus-xlg45" podUID="2cbb3532-a15b-4cca-bde1-aa1ae20698f1" Nov 29 06:36:04 crc kubenswrapper[4947]: I1129 06:36:04.178662 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:36:04 crc kubenswrapper[4947]: I1129 06:36:04.178732 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:36:04 crc kubenswrapper[4947]: E1129 06:36:04.178828 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:36:04 crc kubenswrapper[4947]: I1129 06:36:04.178919 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:36:04 crc kubenswrapper[4947]: E1129 06:36:04.179105 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:36:04 crc kubenswrapper[4947]: E1129 06:36:04.179488 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:36:04 crc kubenswrapper[4947]: I1129 06:36:04.771803 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xlg45_2cbb3532-a15b-4cca-bde1-aa1ae20698f1/kube-multus/1.log" Nov 29 06:36:05 crc kubenswrapper[4947]: I1129 06:36:05.178544 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:36:05 crc kubenswrapper[4947]: E1129 06:36:05.178762 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:36:06 crc kubenswrapper[4947]: I1129 06:36:06.177970 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:36:06 crc kubenswrapper[4947]: I1129 06:36:06.177999 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:36:06 crc kubenswrapper[4947]: I1129 06:36:06.178082 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:36:06 crc kubenswrapper[4947]: E1129 06:36:06.178169 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:36:06 crc kubenswrapper[4947]: E1129 06:36:06.178264 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:36:06 crc kubenswrapper[4947]: E1129 06:36:06.178357 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:36:07 crc kubenswrapper[4947]: I1129 06:36:07.178715 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:36:07 crc kubenswrapper[4947]: E1129 06:36:07.179333 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:36:07 crc kubenswrapper[4947]: I1129 06:36:07.180636 4947 scope.go:117] "RemoveContainer" containerID="f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920" Nov 29 06:36:07 crc kubenswrapper[4947]: I1129 06:36:07.785101 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z4rxq_dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/ovnkube-controller/3.log" Nov 29 06:36:07 crc kubenswrapper[4947]: I1129 06:36:07.788621 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerStarted","Data":"5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42"} Nov 29 06:36:07 crc kubenswrapper[4947]: I1129 06:36:07.789085 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:36:07 crc kubenswrapper[4947]: I1129 06:36:07.816332 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" podStartSLOduration=98.816315799 podStartE2EDuration="1m38.816315799s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:07.816178036 +0000 UTC m=+118.860560127" watchObservedRunningTime="2025-11-29 06:36:07.816315799 +0000 UTC m=+118.860697880" Nov 29 06:36:08 crc kubenswrapper[4947]: I1129 06:36:08.162896 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2fbj5"] Nov 29 06:36:08 crc kubenswrapper[4947]: I1129 06:36:08.163008 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:36:08 crc kubenswrapper[4947]: E1129 06:36:08.163095 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:36:08 crc kubenswrapper[4947]: I1129 06:36:08.178263 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:36:08 crc kubenswrapper[4947]: I1129 06:36:08.178382 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:36:08 crc kubenswrapper[4947]: E1129 06:36:08.178508 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:36:08 crc kubenswrapper[4947]: E1129 06:36:08.178629 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:36:09 crc kubenswrapper[4947]: E1129 06:36:09.172597 4947 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 29 06:36:09 crc kubenswrapper[4947]: I1129 06:36:09.178106 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:36:09 crc kubenswrapper[4947]: E1129 06:36:09.178937 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 06:36:09 crc kubenswrapper[4947]: E1129 06:36:09.409320 4947 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 06:36:10 crc kubenswrapper[4947]: I1129 06:36:10.177784 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:36:10 crc kubenswrapper[4947]: I1129 06:36:10.177847 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:36:10 crc kubenswrapper[4947]: I1129 06:36:10.177997 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:36:10 crc kubenswrapper[4947]: E1129 06:36:10.178658 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 06:36:10 crc kubenswrapper[4947]: E1129 06:36:10.179035 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8" Nov 29 06:36:10 crc kubenswrapper[4947]: E1129 06:36:10.178958 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 06:36:11 crc kubenswrapper[4947]: I1129 06:36:11.178620 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 29 06:36:11 crc kubenswrapper[4947]: E1129 06:36:11.178794 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 29 06:36:12 crc kubenswrapper[4947]: I1129 06:36:12.177841 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5"
Nov 29 06:36:12 crc kubenswrapper[4947]: I1129 06:36:12.177949 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 29 06:36:12 crc kubenswrapper[4947]: E1129 06:36:12.178055 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8"
Nov 29 06:36:12 crc kubenswrapper[4947]: I1129 06:36:12.178093 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 29 06:36:12 crc kubenswrapper[4947]: E1129 06:36:12.178280 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 29 06:36:12 crc kubenswrapper[4947]: E1129 06:36:12.178385 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 29 06:36:13 crc kubenswrapper[4947]: I1129 06:36:13.178273 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 29 06:36:13 crc kubenswrapper[4947]: E1129 06:36:13.178426 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 29 06:36:14 crc kubenswrapper[4947]: I1129 06:36:14.177759 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 29 06:36:14 crc kubenswrapper[4947]: I1129 06:36:14.177831 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 29 06:36:14 crc kubenswrapper[4947]: E1129 06:36:14.178211 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 29 06:36:14 crc kubenswrapper[4947]: I1129 06:36:14.177831 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5"
Nov 29 06:36:14 crc kubenswrapper[4947]: E1129 06:36:14.178317 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 29 06:36:14 crc kubenswrapper[4947]: E1129 06:36:14.178428 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8"
Nov 29 06:36:14 crc kubenswrapper[4947]: E1129 06:36:14.410988 4947 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 29 06:36:15 crc kubenswrapper[4947]: I1129 06:36:15.179470 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 29 06:36:15 crc kubenswrapper[4947]: E1129 06:36:15.179613 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 29 06:36:16 crc kubenswrapper[4947]: I1129 06:36:16.177792 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 29 06:36:16 crc kubenswrapper[4947]: I1129 06:36:16.177819 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 29 06:36:16 crc kubenswrapper[4947]: E1129 06:36:16.177963 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 29 06:36:16 crc kubenswrapper[4947]: I1129 06:36:16.178034 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5"
Nov 29 06:36:16 crc kubenswrapper[4947]: E1129 06:36:16.178189 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 29 06:36:16 crc kubenswrapper[4947]: E1129 06:36:16.178359 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8"
Nov 29 06:36:17 crc kubenswrapper[4947]: I1129 06:36:17.178552 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 29 06:36:17 crc kubenswrapper[4947]: E1129 06:36:17.179049 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 29 06:36:17 crc kubenswrapper[4947]: I1129 06:36:17.179446 4947 scope.go:117] "RemoveContainer" containerID="63ddd0da1118c2e86da1aea51f8248927f80bc1d21790723952b4d59f294cd76"
Nov 29 06:36:17 crc kubenswrapper[4947]: I1129 06:36:17.820588 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xlg45_2cbb3532-a15b-4cca-bde1-aa1ae20698f1/kube-multus/1.log"
Nov 29 06:36:17 crc kubenswrapper[4947]: I1129 06:36:17.820640 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xlg45" event={"ID":"2cbb3532-a15b-4cca-bde1-aa1ae20698f1","Type":"ContainerStarted","Data":"835a800714641bae786d619e7b11ef925de7bab3829365dde0a3e2934199065e"}
Nov 29 06:36:18 crc kubenswrapper[4947]: I1129 06:36:18.178364 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 29 06:36:18 crc kubenswrapper[4947]: I1129 06:36:18.178411 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 29 06:36:18 crc kubenswrapper[4947]: E1129 06:36:18.178497 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 29 06:36:18 crc kubenswrapper[4947]: I1129 06:36:18.178384 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5"
Nov 29 06:36:18 crc kubenswrapper[4947]: E1129 06:36:18.178596 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 29 06:36:18 crc kubenswrapper[4947]: E1129 06:36:18.178661 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2fbj5" podUID="53a3bcac-8ad0-47ce-abee-ee56fd152ea8"
Nov 29 06:36:19 crc kubenswrapper[4947]: I1129 06:36:19.178555 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 29 06:36:19 crc kubenswrapper[4947]: E1129 06:36:19.180695 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 29 06:36:20 crc kubenswrapper[4947]: I1129 06:36:20.178586 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 29 06:36:20 crc kubenswrapper[4947]: I1129 06:36:20.178583 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5"
Nov 29 06:36:20 crc kubenswrapper[4947]: I1129 06:36:20.178583 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 29 06:36:20 crc kubenswrapper[4947]: I1129 06:36:20.180771 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Nov 29 06:36:20 crc kubenswrapper[4947]: I1129 06:36:20.181604 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Nov 29 06:36:20 crc kubenswrapper[4947]: I1129 06:36:20.182054 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Nov 29 06:36:20 crc kubenswrapper[4947]: I1129 06:36:20.183002 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Nov 29 06:36:20 crc kubenswrapper[4947]: I1129 06:36:20.183916 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Nov 29 06:36:20 crc kubenswrapper[4947]: I1129 06:36:20.184430 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Nov 29 06:36:21 crc kubenswrapper[4947]: I1129 06:36:21.177877 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.797666 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.831198 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lx74d"]
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.832645 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-lx74d"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.836960 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.839137 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.839203 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.839373 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.839467 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.839394 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.840282 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.841905 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mcs9v"]
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.842106 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.842313 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mcs9v"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.842399 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.842661 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.845444 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.845472 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.845794 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.846228 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v"]
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.846706 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.846960 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2vgqt"]
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.847452 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.849534 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.850256 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.850444 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.850632 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.852080 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.852320 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.852598 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.852924 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.852975 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.853667 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hm525"]
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.854161 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hm525"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.855340 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.855413 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b6rpb"]
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.856168 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b6rpb"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.856897 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.856940 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.857778 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.858025 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.867824 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-g227d"]
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.868262 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-flwtp"]
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.867825 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.867898 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.867951 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.868818 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-sghgg"]
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.868928 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-g227d"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.868895 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-flwtp"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.869870 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gkrmk"]
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.870152 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r88sq"]
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.870609 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r88sq"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.870859 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sghgg"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.871117 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.872212 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.872380 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.872469 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.877451 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.879564 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.880083 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-tcchr"]
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.893339 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.895333 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.895556 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.895765 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.896060 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.896243 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.896470 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.896679 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.897046 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.897578 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.897941 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.898344 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.898537 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.898872 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.898874 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9rskb"]
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.900901 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9rskb"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.903719 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-tcchr"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.903930 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-b45sk"]
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.916772 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.916993 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.917264 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.917412 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.917447 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.917566 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.918196 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-72fb2"]
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.918435 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.918476 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.918597 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-72fb2"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.918714 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.918719 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b45sk"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.920682 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cjcrf"]
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.921382 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.921425 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mfvb"]
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.921389 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.921723 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.921808 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.922065 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.922285 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.922422 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.922580 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.923015 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.923526 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.923791 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.923965 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-nswtf"]
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.924515 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-nswtf"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.926252 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.926524 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mfvb"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.927382 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.927659 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.931628 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.935245 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.935346 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.935468 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.936077 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.936263 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.936410 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.936769 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.942112 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.942454 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.942558 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.942708 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.942818 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.942930 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.943026 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xxz9x"]
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.943058 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.943571 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.943626 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-zcdgs"]
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.943927 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-zcdgs"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.943996 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxz9x"
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.946329 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dh8pj"]
Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.947309 4947 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dh8pj" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.948881 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.948993 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.949547 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.950321 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.950805 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.952956 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.953122 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.953399 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.953511 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.953407 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 
06:36:24.953446 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.953475 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.953140 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.956050 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm"] Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.956763 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.957166 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-6ndbx"] Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.957618 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-6ndbx" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958188 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d0bccdee-ee49-4d76-9826-0e8ece077528-images\") pod \"machine-api-operator-5694c8668f-flwtp\" (UID: \"d0bccdee-ee49-4d76-9826-0e8ece077528\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flwtp" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958253 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20404d7a-857c-4f60-beef-e6ef9116804d-serving-cert\") pod \"route-controller-manager-6576b87f9c-z8t9v\" (UID: \"20404d7a-857c-4f60-beef-e6ef9116804d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958286 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a50236c5-0779-4d32-968b-2d4aee931dd6-etcd-ca\") pod \"etcd-operator-b45778765-gkrmk\" (UID: \"a50236c5-0779-4d32-968b-2d4aee931dd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958308 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96b6a564-8c15-4680-a2e2-c8bf9289fdf8-serving-cert\") pod \"console-operator-58897d9998-mcs9v\" (UID: \"96b6a564-8c15-4680-a2e2-c8bf9289fdf8\") " pod="openshift-console-operator/console-operator-58897d9998-mcs9v" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958330 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 29 06:36:24 
crc kubenswrapper[4947]: I1129 06:36:24.958331 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20404d7a-857c-4f60-beef-e6ef9116804d-config\") pod \"route-controller-manager-6576b87f9c-z8t9v\" (UID: \"20404d7a-857c-4f60-beef-e6ef9116804d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958571 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a-config\") pod \"machine-approver-56656f9798-sghgg\" (UID: \"e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sghgg" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958593 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-config\") pod \"controller-manager-879f6c89f-2vgqt\" (UID: \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958617 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bccmc\" (UniqueName: \"kubernetes.io/projected/f060d79a-f223-455c-b203-0bd9e430a896-kube-api-access-bccmc\") pod \"dns-operator-744455d44c-tcchr\" (UID: \"f060d79a-f223-455c-b203-0bd9e430a896\") " pod="openshift-dns-operator/dns-operator-744455d44c-tcchr" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958643 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a-machine-approver-tls\") 
pod \"machine-approver-56656f9798-sghgg\" (UID: \"e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sghgg" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958679 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/061238bd-c978-4bf9-9868-5ef174d414f2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hm525\" (UID: \"061238bd-c978-4bf9-9868-5ef174d414f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hm525" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958705 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-serving-cert\") pod \"controller-manager-879f6c89f-2vgqt\" (UID: \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958725 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b6a564-8c15-4680-a2e2-c8bf9289fdf8-config\") pod \"console-operator-58897d9998-mcs9v\" (UID: \"96b6a564-8c15-4680-a2e2-c8bf9289fdf8\") " pod="openshift-console-operator/console-operator-58897d9998-mcs9v" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958745 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b85a2376-eba6-4a1e-b6eb-870ffc696f31-etcd-client\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958777 4947 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b85a2376-eba6-4a1e-b6eb-870ffc696f31-encryption-config\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958804 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b85a2376-eba6-4a1e-b6eb-870ffc696f31-node-pullsecrets\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958831 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b85a2376-eba6-4a1e-b6eb-870ffc696f31-audit\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958851 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-client-ca\") pod \"controller-manager-879f6c89f-2vgqt\" (UID: \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958878 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a689b7f0-2ae8-4200-9e32-0ed56e5791d1-config\") pod \"kube-controller-manager-operator-78b949d7b-9rskb\" (UID: \"a689b7f0-2ae8-4200-9e32-0ed56e5791d1\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9rskb" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958901 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqcks\" (UniqueName: \"kubernetes.io/projected/13ecfe15-43e4-42ff-817f-fc95fb8f54aa-kube-api-access-nqcks\") pod \"migrator-59844c95c7-r88sq\" (UID: \"13ecfe15-43e4-42ff-817f-fc95fb8f54aa\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r88sq" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958943 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/061238bd-c978-4bf9-9868-5ef174d414f2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hm525\" (UID: \"061238bd-c978-4bf9-9868-5ef174d414f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hm525" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958965 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cfb4573-1a3c-43b5-aa58-83774b1b9212-service-ca-bundle\") pod \"authentication-operator-69f744f599-g227d\" (UID: \"4cfb4573-1a3c-43b5-aa58-83774b1b9212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g227d" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958990 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b85a2376-eba6-4a1e-b6eb-870ffc696f31-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959016 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b85a2376-eba6-4a1e-b6eb-870ffc696f31-audit-dir\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959040 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4q46\" (UniqueName: \"kubernetes.io/projected/e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a-kube-api-access-q4q46\") pod \"machine-approver-56656f9798-sghgg\" (UID: \"e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sghgg" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959065 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0bccdee-ee49-4d76-9826-0e8ece077528-config\") pod \"machine-api-operator-5694c8668f-flwtp\" (UID: \"d0bccdee-ee49-4d76-9826-0e8ece077528\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flwtp" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.958984 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bkmbq"] Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959157 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8s77\" (UniqueName: \"kubernetes.io/projected/a50236c5-0779-4d32-968b-2d4aee931dd6-kube-api-access-x8s77\") pod \"etcd-operator-b45778765-gkrmk\" (UID: \"a50236c5-0779-4d32-968b-2d4aee931dd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959261 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-v5h8f\" (UniqueName: \"kubernetes.io/projected/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-kube-api-access-v5h8f\") pod \"controller-manager-879f6c89f-2vgqt\" (UID: \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959285 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpb2s\" (UniqueName: \"kubernetes.io/projected/20404d7a-857c-4f60-beef-e6ef9116804d-kube-api-access-hpb2s\") pod \"route-controller-manager-6576b87f9c-z8t9v\" (UID: \"20404d7a-857c-4f60-beef-e6ef9116804d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959322 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b85a2376-eba6-4a1e-b6eb-870ffc696f31-config\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959362 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95k4w\" (UniqueName: \"kubernetes.io/projected/2ce0e4b6-b001-41b7-a850-a77a9b7131d9-kube-api-access-95k4w\") pod \"cluster-samples-operator-665b6dd947-b6rpb\" (UID: \"2ce0e4b6-b001-41b7-a850-a77a9b7131d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b6rpb" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959414 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b85a2376-eba6-4a1e-b6eb-870ffc696f31-image-import-ca\") pod \"apiserver-76f77b778f-lx74d\" (UID: 
\"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959444 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cfb4573-1a3c-43b5-aa58-83774b1b9212-serving-cert\") pod \"authentication-operator-69f744f599-g227d\" (UID: \"4cfb4573-1a3c-43b5-aa58-83774b1b9212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g227d" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959471 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a689b7f0-2ae8-4200-9e32-0ed56e5791d1-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9rskb\" (UID: \"a689b7f0-2ae8-4200-9e32-0ed56e5791d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9rskb" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959496 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6m4f\" (UniqueName: \"kubernetes.io/projected/061238bd-c978-4bf9-9868-5ef174d414f2-kube-api-access-f6m4f\") pod \"cluster-image-registry-operator-dc59b4c8b-hm525\" (UID: \"061238bd-c978-4bf9-9868-5ef174d414f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hm525" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959515 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a689b7f0-2ae8-4200-9e32-0ed56e5791d1-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9rskb\" (UID: \"a689b7f0-2ae8-4200-9e32-0ed56e5791d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9rskb" Nov 29 06:36:24 crc 
kubenswrapper[4947]: I1129 06:36:24.959530 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cfb4573-1a3c-43b5-aa58-83774b1b9212-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-g227d\" (UID: \"4cfb4573-1a3c-43b5-aa58-83774b1b9212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g227d" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959566 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a50236c5-0779-4d32-968b-2d4aee931dd6-etcd-service-ca\") pod \"etcd-operator-b45778765-gkrmk\" (UID: \"a50236c5-0779-4d32-968b-2d4aee931dd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959591 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cfb4573-1a3c-43b5-aa58-83774b1b9212-config\") pod \"authentication-operator-69f744f599-g227d\" (UID: \"4cfb4573-1a3c-43b5-aa58-83774b1b9212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g227d" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959641 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a50236c5-0779-4d32-968b-2d4aee931dd6-etcd-client\") pod \"etcd-operator-b45778765-gkrmk\" (UID: \"a50236c5-0779-4d32-968b-2d4aee931dd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959658 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b85a2376-eba6-4a1e-b6eb-870ffc696f31-serving-cert\") pod 
\"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959690 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a-auth-proxy-config\") pod \"machine-approver-56656f9798-sghgg\" (UID: \"e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sghgg" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959714 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20404d7a-857c-4f60-beef-e6ef9116804d-client-ca\") pod \"route-controller-manager-6576b87f9c-z8t9v\" (UID: \"20404d7a-857c-4f60-beef-e6ef9116804d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959735 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2ce0e4b6-b001-41b7-a850-a77a9b7131d9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-b6rpb\" (UID: \"2ce0e4b6-b001-41b7-a850-a77a9b7131d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b6rpb" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959759 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/061238bd-c978-4bf9-9868-5ef174d414f2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hm525\" (UID: \"061238bd-c978-4bf9-9868-5ef174d414f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hm525" Nov 29 06:36:24 crc 
kubenswrapper[4947]: I1129 06:36:24.959790 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a50236c5-0779-4d32-968b-2d4aee931dd6-serving-cert\") pod \"etcd-operator-b45778765-gkrmk\" (UID: \"a50236c5-0779-4d32-968b-2d4aee931dd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959813 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rmgg\" (UniqueName: \"kubernetes.io/projected/96b6a564-8c15-4680-a2e2-c8bf9289fdf8-kube-api-access-4rmgg\") pod \"console-operator-58897d9998-mcs9v\" (UID: \"96b6a564-8c15-4680-a2e2-c8bf9289fdf8\") " pod="openshift-console-operator/console-operator-58897d9998-mcs9v" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959831 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jl4h\" (UniqueName: \"kubernetes.io/projected/d0bccdee-ee49-4d76-9826-0e8ece077528-kube-api-access-7jl4h\") pod \"machine-api-operator-5694c8668f-flwtp\" (UID: \"d0bccdee-ee49-4d76-9826-0e8ece077528\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flwtp" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959847 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2vgqt\" (UID: \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959883 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a50236c5-0779-4d32-968b-2d4aee931dd6-config\") pod \"etcd-operator-b45778765-gkrmk\" (UID: \"a50236c5-0779-4d32-968b-2d4aee931dd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959931 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b85a2376-eba6-4a1e-b6eb-870ffc696f31-etcd-serving-ca\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959950 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79bq6\" (UniqueName: \"kubernetes.io/projected/b85a2376-eba6-4a1e-b6eb-870ffc696f31-kube-api-access-79bq6\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959974 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/96b6a564-8c15-4680-a2e2-c8bf9289fdf8-trusted-ca\") pod \"console-operator-58897d9998-mcs9v\" (UID: \"96b6a564-8c15-4680-a2e2-c8bf9289fdf8\") " pod="openshift-console-operator/console-operator-58897d9998-mcs9v" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.959994 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0bccdee-ee49-4d76-9826-0e8ece077528-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-flwtp\" (UID: \"d0bccdee-ee49-4d76-9826-0e8ece077528\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flwtp" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 
06:36:24.960014 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcctf\" (UniqueName: \"kubernetes.io/projected/4cfb4573-1a3c-43b5-aa58-83774b1b9212-kube-api-access-hcctf\") pod \"authentication-operator-69f744f599-g227d\" (UID: \"4cfb4573-1a3c-43b5-aa58-83774b1b9212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g227d" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.960058 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f060d79a-f223-455c-b203-0bd9e430a896-metrics-tls\") pod \"dns-operator-744455d44c-tcchr\" (UID: \"f060d79a-f223-455c-b203-0bd9e430a896\") " pod="openshift-dns-operator/dns-operator-744455d44c-tcchr" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.962837 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bkmbq" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.966244 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.970407 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cwrn9"] Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.979509 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lkf5s"] Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.980785 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lkf5s" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.982361 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.986312 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cwrn9" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.986550 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.986719 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dzzqk"] Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.987650 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dzzqk" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.990819 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5b7f7"] Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.992134 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzd69"] Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.992256 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5b7f7" Nov 29 06:36:24 crc kubenswrapper[4947]: I1129 06:36:24.994441 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzd69" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.012332 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv66j"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.013076 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv66j" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.013388 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.013612 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.013825 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.014477 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2drts"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.016088 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.016847 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2nj82"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.017000 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.017362 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-2drts" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.018389 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-7jmxk"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.018702 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2nj82" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.019407 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mcs9v"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.019444 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xjxbf"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.019903 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xjxbf" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.020120 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jmxk" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.020361 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6mhws"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.021005 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.021393 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lx74d"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.023306 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.024372 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-66xmw"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.024908 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.025184 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-66xmw" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.026723 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.026744 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2vgqt"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.028328 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-td4m7"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.029126 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b6rpb"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.029234 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-td4m7" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.029781 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cjcrf"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.030910 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-flwtp"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.032185 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hm525"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.034954 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.036567 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-nswtf"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.036602 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bkmbq"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.036619 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gkrmk"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.037881 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-tcchr"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.038675 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.040277 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dzzqk"] Nov 29 06:36:25 crc 
kubenswrapper[4947]: I1129 06:36:25.040824 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv66j"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.041836 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9rskb"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.042974 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-b45sk"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.048465 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-g227d"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.048570 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dh8pj"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.056488 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.056736 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.058850 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mfvb"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.060019 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzd69"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.060700 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71-stats-auth\") pod \"router-default-5444994796-6ndbx\" (UID: \"a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71\") " pod="openshift-ingress/router-default-5444994796-6ndbx" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.060754 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f31928-dd2b-41c9-8103-3652eb01b1ad-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bkmbq\" (UID: \"e6f31928-dd2b-41c9-8103-3652eb01b1ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bkmbq" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.060774 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.060817 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.060893 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: 
\"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.060921 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/061238bd-c978-4bf9-9868-5ef174d414f2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hm525\" (UID: \"061238bd-c978-4bf9-9868-5ef174d414f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hm525" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.060938 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-serving-cert\") pod \"controller-manager-879f6c89f-2vgqt\" (UID: \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.060977 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8a3f832-b756-47ec-9729-7beacc669293-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dzzqk\" (UID: \"e8a3f832-b756-47ec-9729-7beacc669293\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dzzqk" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.060999 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b6a564-8c15-4680-a2e2-c8bf9289fdf8-config\") pod \"console-operator-58897d9998-mcs9v\" (UID: \"96b6a564-8c15-4680-a2e2-c8bf9289fdf8\") " pod="openshift-console-operator/console-operator-58897d9998-mcs9v" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061014 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b85a2376-eba6-4a1e-b6eb-870ffc696f31-etcd-client\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061029 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b85a2376-eba6-4a1e-b6eb-870ffc696f31-encryption-config\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061066 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtck7\" (UniqueName: \"kubernetes.io/projected/967c6d72-d998-4b42-8de2-b9fd1712fc12-kube-api-access-rtck7\") pod \"service-ca-9c57cc56f-2nj82\" (UID: \"967c6d72-d998-4b42-8de2-b9fd1712fc12\") " pod="openshift-service-ca/service-ca-9c57cc56f-2nj82" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061083 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b85a2376-eba6-4a1e-b6eb-870ffc696f31-audit\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061098 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-client-ca\") pod \"controller-manager-879f6c89f-2vgqt\" (UID: \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061132 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b85a2376-eba6-4a1e-b6eb-870ffc696f31-node-pullsecrets\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061150 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5da50f13-a25d-403c-8fda-39f93a5cf4fd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qv66j\" (UID: \"5da50f13-a25d-403c-8fda-39f93a5cf4fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv66j" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061168 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061187 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a689b7f0-2ae8-4200-9e32-0ed56e5791d1-config\") pod \"kube-controller-manager-operator-78b949d7b-9rskb\" (UID: \"a689b7f0-2ae8-4200-9e32-0ed56e5791d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9rskb" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061234 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqcks\" (UniqueName: \"kubernetes.io/projected/13ecfe15-43e4-42ff-817f-fc95fb8f54aa-kube-api-access-nqcks\") pod \"migrator-59844c95c7-r88sq\" (UID: \"13ecfe15-43e4-42ff-817f-fc95fb8f54aa\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r88sq" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061253 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/967c6d72-d998-4b42-8de2-b9fd1712fc12-signing-key\") pod \"service-ca-9c57cc56f-2nj82\" (UID: \"967c6d72-d998-4b42-8de2-b9fd1712fc12\") " pod="openshift-service-ca/service-ca-9c57cc56f-2nj82" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061291 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/019be0a2-be0d-43c7-a91d-280a3508c623-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2mfvb\" (UID: \"019be0a2-be0d-43c7-a91d-280a3508c623\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mfvb" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061308 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsrw8\" (UniqueName: \"kubernetes.io/projected/019be0a2-be0d-43c7-a91d-280a3508c623-kube-api-access-rsrw8\") pod \"openshift-apiserver-operator-796bbdcf4f-2mfvb\" (UID: \"019be0a2-be0d-43c7-a91d-280a3508c623\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mfvb" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061335 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cfb4573-1a3c-43b5-aa58-83774b1b9212-service-ca-bundle\") pod \"authentication-operator-69f744f599-g227d\" (UID: \"4cfb4573-1a3c-43b5-aa58-83774b1b9212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g227d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061371 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"images\" (UniqueName: \"kubernetes.io/configmap/23977121-e0f0-4055-a727-c4050a20f2a6-images\") pod \"machine-config-operator-74547568cd-dh8pj\" (UID: \"23977121-e0f0-4055-a727-c4050a20f2a6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dh8pj" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061391 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/061238bd-c978-4bf9-9868-5ef174d414f2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hm525\" (UID: \"061238bd-c978-4bf9-9868-5ef174d414f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hm525" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061406 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b85a2376-eba6-4a1e-b6eb-870ffc696f31-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061421 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f2bc292-c394-4dd0-9ce5-51d960430aa4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5b7f7\" (UID: \"1f2bc292-c394-4dd0-9ce5-51d960430aa4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5b7f7" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061456 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f2bc292-c394-4dd0-9ce5-51d960430aa4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5b7f7\" (UID: \"1f2bc292-c394-4dd0-9ce5-51d960430aa4\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5b7f7" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061491 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/967c6d72-d998-4b42-8de2-b9fd1712fc12-signing-cabundle\") pod \"service-ca-9c57cc56f-2nj82\" (UID: \"967c6d72-d998-4b42-8de2-b9fd1712fc12\") " pod="openshift-service-ca/service-ca-9c57cc56f-2nj82" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061534 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4q46\" (UniqueName: \"kubernetes.io/projected/e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a-kube-api-access-q4q46\") pod \"machine-approver-56656f9798-sghgg\" (UID: \"e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sghgg" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061553 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqzqt\" (UniqueName: \"kubernetes.io/projected/d950d1cc-546a-4650-ab9c-e58388bda769-kube-api-access-sqzqt\") pod \"multus-admission-controller-857f4d67dd-2drts\" (UID: \"d950d1cc-546a-4650-ab9c-e58388bda769\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2drts" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061570 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1859af1a-8cea-4330-b44f-69c94692bfde-webhook-cert\") pod \"packageserver-d55dfcdfc-m5hbt\" (UID: \"1859af1a-8cea-4330-b44f-69c94692bfde\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061584 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-pcjlp\" (UniqueName: \"kubernetes.io/projected/1859af1a-8cea-4330-b44f-69c94692bfde-kube-api-access-pcjlp\") pod \"packageserver-d55dfcdfc-m5hbt\" (UID: \"1859af1a-8cea-4330-b44f-69c94692bfde\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061636 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061664 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0bccdee-ee49-4d76-9826-0e8ece077528-config\") pod \"machine-api-operator-5694c8668f-flwtp\" (UID: \"d0bccdee-ee49-4d76-9826-0e8ece077528\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flwtp" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061728 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b85a2376-eba6-4a1e-b6eb-870ffc696f31-audit-dir\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061750 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8s77\" (UniqueName: \"kubernetes.io/projected/a50236c5-0779-4d32-968b-2d4aee931dd6-kube-api-access-x8s77\") pod \"etcd-operator-b45778765-gkrmk\" (UID: \"a50236c5-0779-4d32-968b-2d4aee931dd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk" Nov 29 06:36:25 crc 
kubenswrapper[4947]: I1129 06:36:25.061804 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5h8f\" (UniqueName: \"kubernetes.io/projected/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-kube-api-access-v5h8f\") pod \"controller-manager-879f6c89f-2vgqt\" (UID: \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061830 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpb2s\" (UniqueName: \"kubernetes.io/projected/20404d7a-857c-4f60-beef-e6ef9116804d-kube-api-access-hpb2s\") pod \"route-controller-manager-6576b87f9c-z8t9v\" (UID: \"20404d7a-857c-4f60-beef-e6ef9116804d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061885 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b85a2376-eba6-4a1e-b6eb-870ffc696f31-config\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061904 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l2qs\" (UniqueName: \"kubernetes.io/projected/a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71-kube-api-access-8l2qs\") pod \"router-default-5444994796-6ndbx\" (UID: \"a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71\") " pod="openshift-ingress/router-default-5444994796-6ndbx" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061922 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.061981 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95k4w\" (UniqueName: \"kubernetes.io/projected/2ce0e4b6-b001-41b7-a850-a77a9b7131d9-kube-api-access-95k4w\") pod \"cluster-samples-operator-665b6dd947-b6rpb\" (UID: \"2ce0e4b6-b001-41b7-a850-a77a9b7131d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b6rpb" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.062006 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d950d1cc-546a-4650-ab9c-e58388bda769-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2drts\" (UID: \"d950d1cc-546a-4650-ab9c-e58388bda769\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2drts" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.062054 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07b30d68-832e-44c3-aa22-18c8f1cbb6e6-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cwrn9\" (UID: \"07b30d68-832e-44c3-aa22-18c8f1cbb6e6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cwrn9" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.062054 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5b7f7"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.062089 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b85a2376-eba6-4a1e-b6eb-870ffc696f31-image-import-ca\") pod 
\"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.062138 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07b30d68-832e-44c3-aa22-18c8f1cbb6e6-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cwrn9\" (UID: \"07b30d68-832e-44c3-aa22-18c8f1cbb6e6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cwrn9" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.062188 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cfb4573-1a3c-43b5-aa58-83774b1b9212-serving-cert\") pod \"authentication-operator-69f744f599-g227d\" (UID: \"4cfb4573-1a3c-43b5-aa58-83774b1b9212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g227d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.062231 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt7mh\" (UniqueName: \"kubernetes.io/projected/79190f34-e70f-4fa8-b8da-7db3b29678a0-kube-api-access-wt7mh\") pod \"machine-config-controller-84d6567774-lkf5s\" (UID: \"79190f34-e70f-4fa8-b8da-7db3b29678a0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lkf5s" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.062265 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.062287 
4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a689b7f0-2ae8-4200-9e32-0ed56e5791d1-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9rskb\" (UID: \"a689b7f0-2ae8-4200-9e32-0ed56e5791d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9rskb" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.062306 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71-service-ca-bundle\") pod \"router-default-5444994796-6ndbx\" (UID: \"a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71\") " pod="openshift-ingress/router-default-5444994796-6ndbx" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.062413 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6m4f\" (UniqueName: \"kubernetes.io/projected/061238bd-c978-4bf9-9868-5ef174d414f2-kube-api-access-f6m4f\") pod \"cluster-image-registry-operator-dc59b4c8b-hm525\" (UID: \"061238bd-c978-4bf9-9868-5ef174d414f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hm525" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.062431 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a689b7f0-2ae8-4200-9e32-0ed56e5791d1-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9rskb\" (UID: \"a689b7f0-2ae8-4200-9e32-0ed56e5791d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9rskb" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.062448 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4cfb4573-1a3c-43b5-aa58-83774b1b9212-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-g227d\" (UID: \"4cfb4573-1a3c-43b5-aa58-83774b1b9212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g227d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.062466 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71-default-certificate\") pod \"router-default-5444994796-6ndbx\" (UID: \"a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71\") " pod="openshift-ingress/router-default-5444994796-6ndbx" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.062488 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a50236c5-0779-4d32-968b-2d4aee931dd6-etcd-service-ca\") pod \"etcd-operator-b45778765-gkrmk\" (UID: \"a50236c5-0779-4d32-968b-2d4aee931dd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.062506 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cfb4573-1a3c-43b5-aa58-83774b1b9212-config\") pod \"authentication-operator-69f744f599-g227d\" (UID: \"4cfb4573-1a3c-43b5-aa58-83774b1b9212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g227d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.063351 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b85a2376-eba6-4a1e-b6eb-870ffc696f31-audit-dir\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.063833 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/061238bd-c978-4bf9-9868-5ef174d414f2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hm525\" (UID: \"061238bd-c978-4bf9-9868-5ef174d414f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hm525" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.064035 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b85a2376-eba6-4a1e-b6eb-870ffc696f31-config\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.065028 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-p7wlh"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.065045 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b6a564-8c15-4680-a2e2-c8bf9289fdf8-config\") pod \"console-operator-58897d9998-mcs9v\" (UID: \"96b6a564-8c15-4680-a2e2-c8bf9289fdf8\") " pod="openshift-console-operator/console-operator-58897d9998-mcs9v" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.065084 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b85a2376-eba6-4a1e-b6eb-870ffc696f31-node-pullsecrets\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.065435 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0bccdee-ee49-4d76-9826-0e8ece077528-config\") pod \"machine-api-operator-5694c8668f-flwtp\" (UID: \"d0bccdee-ee49-4d76-9826-0e8ece077528\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-flwtp" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.065892 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b85a2376-eba6-4a1e-b6eb-870ffc696f31-audit\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.066291 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a689b7f0-2ae8-4200-9e32-0ed56e5791d1-config\") pod \"kube-controller-manager-operator-78b949d7b-9rskb\" (UID: \"a689b7f0-2ae8-4200-9e32-0ed56e5791d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9rskb" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.066534 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.069594 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cfb4573-1a3c-43b5-aa58-83774b1b9212-service-ca-bundle\") pod \"authentication-operator-69f744f599-g227d\" (UID: \"4cfb4573-1a3c-43b5-aa58-83774b1b9212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g227d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.069638 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b85a2376-eba6-4a1e-b6eb-870ffc696f31-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.070096 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cfb4573-1a3c-43b5-aa58-83774b1b9212-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-g227d\" (UID: \"4cfb4573-1a3c-43b5-aa58-83774b1b9212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g227d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.070155 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b85a2376-eba6-4a1e-b6eb-870ffc696f31-image-import-ca\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.070241 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/061238bd-c978-4bf9-9868-5ef174d414f2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hm525\" (UID: \"061238bd-c978-4bf9-9868-5ef174d414f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hm525" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.070498 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b85a2376-eba6-4a1e-b6eb-870ffc696f31-encryption-config\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.070789 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-client-ca\") pod \"controller-manager-879f6c89f-2vgqt\" (UID: \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:36:25 crc 
kubenswrapper[4947]: I1129 06:36:25.070875 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a50236c5-0779-4d32-968b-2d4aee931dd6-etcd-service-ca\") pod \"etcd-operator-b45778765-gkrmk\" (UID: \"a50236c5-0779-4d32-968b-2d4aee931dd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.071476 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cfb4573-1a3c-43b5-aa58-83774b1b9212-config\") pod \"authentication-operator-69f744f599-g227d\" (UID: \"4cfb4573-1a3c-43b5-aa58-83774b1b9212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g227d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.071572 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/79190f34-e70f-4fa8-b8da-7db3b29678a0-proxy-tls\") pod \"machine-config-controller-84d6567774-lkf5s\" (UID: \"79190f34-e70f-4fa8-b8da-7db3b29678a0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lkf5s" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.071606 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71-metrics-certs\") pod \"router-default-5444994796-6ndbx\" (UID: \"a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71\") " pod="openshift-ingress/router-default-5444994796-6ndbx" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.071629 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwvbd\" (UniqueName: \"kubernetes.io/projected/51755494-2de8-480e-a1e5-fc10c9af3d06-kube-api-access-dwvbd\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: 
\"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.071854 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a50236c5-0779-4d32-968b-2d4aee931dd6-etcd-client\") pod \"etcd-operator-b45778765-gkrmk\" (UID: \"a50236c5-0779-4d32-968b-2d4aee931dd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.072029 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b85a2376-eba6-4a1e-b6eb-870ffc696f31-serving-cert\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.072078 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-serving-cert\") pod \"controller-manager-879f6c89f-2vgqt\" (UID: \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.072076 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4bmr\" (UniqueName: \"kubernetes.io/projected/23977121-e0f0-4055-a727-c4050a20f2a6-kube-api-access-r4bmr\") pod \"machine-config-operator-74547568cd-dh8pj\" (UID: \"23977121-e0f0-4055-a727-c4050a20f2a6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dh8pj" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.072181 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.073417 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a689b7f0-2ae8-4200-9e32-0ed56e5791d1-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9rskb\" (UID: \"a689b7f0-2ae8-4200-9e32-0ed56e5791d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9rskb" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.073494 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a-auth-proxy-config\") pod \"machine-approver-56656f9798-sghgg\" (UID: \"e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sghgg" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.074171 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a-auth-proxy-config\") pod \"machine-approver-56656f9798-sghgg\" (UID: \"e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sghgg" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.074280 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/79190f34-e70f-4fa8-b8da-7db3b29678a0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lkf5s\" (UID: \"79190f34-e70f-4fa8-b8da-7db3b29678a0\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lkf5s" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.074328 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/51755494-2de8-480e-a1e5-fc10c9af3d06-audit-dir\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.074394 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2ce0e4b6-b001-41b7-a850-a77a9b7131d9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-b6rpb\" (UID: \"2ce0e4b6-b001-41b7-a850-a77a9b7131d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b6rpb" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.074464 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/061238bd-c978-4bf9-9868-5ef174d414f2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hm525\" (UID: \"061238bd-c978-4bf9-9868-5ef174d414f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hm525" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.074491 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20404d7a-857c-4f60-beef-e6ef9116804d-client-ca\") pod \"route-controller-manager-6576b87f9c-z8t9v\" (UID: \"20404d7a-857c-4f60-beef-e6ef9116804d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.075435 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a50236c5-0779-4d32-968b-2d4aee931dd6-serving-cert\") pod \"etcd-operator-b45778765-gkrmk\" (UID: \"a50236c5-0779-4d32-968b-2d4aee931dd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.076307 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rmgg\" (UniqueName: \"kubernetes.io/projected/96b6a564-8c15-4680-a2e2-c8bf9289fdf8-kube-api-access-4rmgg\") pod \"console-operator-58897d9998-mcs9v\" (UID: \"96b6a564-8c15-4680-a2e2-c8bf9289fdf8\") " pod="openshift-console-operator/console-operator-58897d9998-mcs9v" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.076583 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jl4h\" (UniqueName: \"kubernetes.io/projected/d0bccdee-ee49-4d76-9826-0e8ece077528-kube-api-access-7jl4h\") pod \"machine-api-operator-5694c8668f-flwtp\" (UID: \"d0bccdee-ee49-4d76-9826-0e8ece077528\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flwtp" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.076698 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2vgqt\" (UID: \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.077427 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20404d7a-857c-4f60-beef-e6ef9116804d-client-ca\") pod \"route-controller-manager-6576b87f9c-z8t9v\" (UID: \"20404d7a-857c-4f60-beef-e6ef9116804d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.077678 
4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.078255 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a50236c5-0779-4d32-968b-2d4aee931dd6-etcd-client\") pod \"etcd-operator-b45778765-gkrmk\" (UID: \"a50236c5-0779-4d32-968b-2d4aee931dd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.077883 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a50236c5-0779-4d32-968b-2d4aee931dd6-config\") pod \"etcd-operator-b45778765-gkrmk\" (UID: \"a50236c5-0779-4d32-968b-2d4aee931dd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.078380 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a50236c5-0779-4d32-968b-2d4aee931dd6-config\") pod \"etcd-operator-b45778765-gkrmk\" (UID: \"a50236c5-0779-4d32-968b-2d4aee931dd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.078397 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lkf5s"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.078686 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2ce0e4b6-b001-41b7-a850-a77a9b7131d9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-b6rpb\" (UID: \"2ce0e4b6-b001-41b7-a850-a77a9b7131d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b6rpb" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.079086 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2vgqt\" (UID: \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.079493 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8a3f832-b756-47ec-9729-7beacc669293-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dzzqk\" (UID: \"e8a3f832-b756-47ec-9729-7beacc669293\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dzzqk" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.079628 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1859af1a-8cea-4330-b44f-69c94692bfde-tmpfs\") pod \"packageserver-d55dfcdfc-m5hbt\" (UID: \"1859af1a-8cea-4330-b44f-69c94692bfde\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.079661 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-audit-policies\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.079709 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b85a2376-eba6-4a1e-b6eb-870ffc696f31-etcd-serving-ca\") pod \"apiserver-76f77b778f-lx74d\" (UID: 
\"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.079742 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt594\" (UniqueName: \"kubernetes.io/projected/1f2bc292-c394-4dd0-9ce5-51d960430aa4-kube-api-access-dt594\") pod \"openshift-controller-manager-operator-756b6f6bc6-5b7f7\" (UID: \"1f2bc292-c394-4dd0-9ce5-51d960430aa4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5b7f7" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.079771 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/96b6a564-8c15-4680-a2e2-c8bf9289fdf8-trusted-ca\") pod \"console-operator-58897d9998-mcs9v\" (UID: \"96b6a564-8c15-4680-a2e2-c8bf9289fdf8\") " pod="openshift-console-operator/console-operator-58897d9998-mcs9v" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.079802 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79bq6\" (UniqueName: \"kubernetes.io/projected/b85a2376-eba6-4a1e-b6eb-870ffc696f31-kube-api-access-79bq6\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.079833 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/019be0a2-be0d-43c7-a91d-280a3508c623-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2mfvb\" (UID: \"019be0a2-be0d-43c7-a91d-280a3508c623\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mfvb" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.079863 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6f31928-dd2b-41c9-8103-3652eb01b1ad-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bkmbq\" (UID: \"e6f31928-dd2b-41c9-8103-3652eb01b1ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bkmbq" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.079898 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcctf\" (UniqueName: \"kubernetes.io/projected/4cfb4573-1a3c-43b5-aa58-83774b1b9212-kube-api-access-hcctf\") pod \"authentication-operator-69f744f599-g227d\" (UID: \"4cfb4573-1a3c-43b5-aa58-83774b1b9212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g227d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.079924 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.079958 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.079992 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/d0bccdee-ee49-4d76-9826-0e8ece077528-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-flwtp\" (UID: \"d0bccdee-ee49-4d76-9826-0e8ece077528\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flwtp" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.080028 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f060d79a-f223-455c-b203-0bd9e430a896-metrics-tls\") pod \"dns-operator-744455d44c-tcchr\" (UID: \"f060d79a-f223-455c-b203-0bd9e430a896\") " pod="openshift-dns-operator/dns-operator-744455d44c-tcchr" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.080059 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.080068 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a50236c5-0779-4d32-968b-2d4aee931dd6-serving-cert\") pod \"etcd-operator-b45778765-gkrmk\" (UID: \"a50236c5-0779-4d32-968b-2d4aee931dd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.080092 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts4gg\" (UniqueName: \"kubernetes.io/projected/5da50f13-a25d-403c-8fda-39f93a5cf4fd-kube-api-access-ts4gg\") pod \"olm-operator-6b444d44fb-qv66j\" (UID: \"5da50f13-a25d-403c-8fda-39f93a5cf4fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv66j" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 
06:36:25.080122 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8a3f832-b756-47ec-9729-7beacc669293-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dzzqk\" (UID: \"e8a3f832-b756-47ec-9729-7beacc669293\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dzzqk" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.080136 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2nj82"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.080159 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07b30d68-832e-44c3-aa22-18c8f1cbb6e6-config\") pod \"kube-apiserver-operator-766d6c64bb-cwrn9\" (UID: \"07b30d68-832e-44c3-aa22-18c8f1cbb6e6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cwrn9" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.080193 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d0bccdee-ee49-4d76-9826-0e8ece077528-images\") pod \"machine-api-operator-5694c8668f-flwtp\" (UID: \"d0bccdee-ee49-4d76-9826-0e8ece077528\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flwtp" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.080256 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20404d7a-857c-4f60-beef-e6ef9116804d-serving-cert\") pod \"route-controller-manager-6576b87f9c-z8t9v\" (UID: \"20404d7a-857c-4f60-beef-e6ef9116804d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.081135 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"images\" (UniqueName: \"kubernetes.io/configmap/d0bccdee-ee49-4d76-9826-0e8ece077528-images\") pod \"machine-api-operator-5694c8668f-flwtp\" (UID: \"d0bccdee-ee49-4d76-9826-0e8ece077528\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flwtp" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.081247 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6mhws"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.081258 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5da50f13-a25d-403c-8fda-39f93a5cf4fd-srv-cert\") pod \"olm-operator-6b444d44fb-qv66j\" (UID: \"5da50f13-a25d-403c-8fda-39f93a5cf4fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv66j" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.081259 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b85a2376-eba6-4a1e-b6eb-870ffc696f31-etcd-serving-ca\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.081679 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b85a2376-eba6-4a1e-b6eb-870ffc696f31-serving-cert\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.081713 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtjqt\" (UniqueName: \"kubernetes.io/projected/e0e2a5e3-321b-4774-bf83-dd727fc954d2-kube-api-access-wtjqt\") pod \"downloads-7954f5f757-zcdgs\" (UID: 
\"e0e2a5e3-321b-4774-bf83-dd727fc954d2\") " pod="openshift-console/downloads-7954f5f757-zcdgs" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.081735 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/96b6a564-8c15-4680-a2e2-c8bf9289fdf8-trusted-ca\") pod \"console-operator-58897d9998-mcs9v\" (UID: \"96b6a564-8c15-4680-a2e2-c8bf9289fdf8\") " pod="openshift-console-operator/console-operator-58897d9998-mcs9v" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.081743 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1859af1a-8cea-4330-b44f-69c94692bfde-apiservice-cert\") pod \"packageserver-d55dfcdfc-m5hbt\" (UID: \"1859af1a-8cea-4330-b44f-69c94692bfde\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.081822 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a50236c5-0779-4d32-968b-2d4aee931dd6-etcd-ca\") pod \"etcd-operator-b45778765-gkrmk\" (UID: \"a50236c5-0779-4d32-968b-2d4aee931dd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.081863 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96b6a564-8c15-4680-a2e2-c8bf9289fdf8-serving-cert\") pod \"console-operator-58897d9998-mcs9v\" (UID: \"96b6a564-8c15-4680-a2e2-c8bf9289fdf8\") " pod="openshift-console-operator/console-operator-58897d9998-mcs9v" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.081953 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20404d7a-857c-4f60-beef-e6ef9116804d-config\") pod 
\"route-controller-manager-6576b87f9c-z8t9v\" (UID: \"20404d7a-857c-4f60-beef-e6ef9116804d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.082349 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a50236c5-0779-4d32-968b-2d4aee931dd6-etcd-ca\") pod \"etcd-operator-b45778765-gkrmk\" (UID: \"a50236c5-0779-4d32-968b-2d4aee931dd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.082846 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/23977121-e0f0-4055-a727-c4050a20f2a6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dh8pj\" (UID: \"23977121-e0f0-4055-a727-c4050a20f2a6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dh8pj" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.082978 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a-config\") pod \"machine-approver-56656f9798-sghgg\" (UID: \"e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sghgg" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.083070 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwf66\" (UniqueName: \"kubernetes.io/projected/e6f31928-dd2b-41c9-8103-3652eb01b1ad-kube-api-access-vwf66\") pod \"kube-storage-version-migrator-operator-b67b599dd-bkmbq\" (UID: \"e6f31928-dd2b-41c9-8103-3652eb01b1ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bkmbq" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 
06:36:25.083178 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/23977121-e0f0-4055-a727-c4050a20f2a6-proxy-tls\") pod \"machine-config-operator-74547568cd-dh8pj\" (UID: \"23977121-e0f0-4055-a727-c4050a20f2a6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dh8pj" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.083263 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cfb4573-1a3c-43b5-aa58-83774b1b9212-serving-cert\") pod \"authentication-operator-69f744f599-g227d\" (UID: \"4cfb4573-1a3c-43b5-aa58-83774b1b9212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g227d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.082994 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20404d7a-857c-4f60-beef-e6ef9116804d-config\") pod \"route-controller-manager-6576b87f9c-z8t9v\" (UID: \"20404d7a-857c-4f60-beef-e6ef9116804d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.083388 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-config\") pod \"controller-manager-879f6c89f-2vgqt\" (UID: \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.083450 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20404d7a-857c-4f60-beef-e6ef9116804d-serving-cert\") pod \"route-controller-manager-6576b87f9c-z8t9v\" (UID: \"20404d7a-857c-4f60-beef-e6ef9116804d\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.083466 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a-config\") pod \"machine-approver-56656f9798-sghgg\" (UID: \"e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sghgg" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.083553 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bccmc\" (UniqueName: \"kubernetes.io/projected/f060d79a-f223-455c-b203-0bd9e430a896-kube-api-access-bccmc\") pod \"dns-operator-744455d44c-tcchr\" (UID: \"f060d79a-f223-455c-b203-0bd9e430a896\") " pod="openshift-dns-operator/dns-operator-744455d44c-tcchr" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.083623 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a-machine-approver-tls\") pod \"machine-approver-56656f9798-sghgg\" (UID: \"e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sghgg" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.083782 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xxz9x"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.084818 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-config\") pod \"controller-manager-879f6c89f-2vgqt\" (UID: \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.084828 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b85a2376-eba6-4a1e-b6eb-870ffc696f31-etcd-client\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.085018 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f060d79a-f223-455c-b203-0bd9e430a896-metrics-tls\") pod \"dns-operator-744455d44c-tcchr\" (UID: \"f060d79a-f223-455c-b203-0bd9e430a896\") " pod="openshift-dns-operator/dns-operator-744455d44c-tcchr" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.085141 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96b6a564-8c15-4680-a2e2-c8bf9289fdf8-serving-cert\") pod \"console-operator-58897d9998-mcs9v\" (UID: \"96b6a564-8c15-4680-a2e2-c8bf9289fdf8\") " pod="openshift-console-operator/console-operator-58897d9998-mcs9v" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.085488 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0bccdee-ee49-4d76-9826-0e8ece077528-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-flwtp\" (UID: \"d0bccdee-ee49-4d76-9826-0e8ece077528\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flwtp" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.085991 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r88sq"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.086676 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a-machine-approver-tls\") pod 
\"machine-approver-56656f9798-sghgg\" (UID: \"e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sghgg" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.087852 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zcdgs"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.090035 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cwrn9"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.091286 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-72fb2"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.092376 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-td4m7"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.093450 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-66xmw"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.094737 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2drts"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.094770 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.095892 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-p7wlh"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.097385 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xjxbf"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.099066 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-7jmxk"] 
Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.100637 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.101951 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-l7pgj"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.102731 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-l7pgj" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.103360 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-4cgzv"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.104100 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4cgzv" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.104443 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4cgzv"] Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.135465 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.154999 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.175132 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.184365 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/019be0a2-be0d-43c7-a91d-280a3508c623-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2mfvb\" (UID: 
\"019be0a2-be0d-43c7-a91d-280a3508c623\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mfvb" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.184404 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6f31928-dd2b-41c9-8103-3652eb01b1ad-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bkmbq\" (UID: \"e6f31928-dd2b-41c9-8103-3652eb01b1ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bkmbq" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.184430 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.184450 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.184476 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts4gg\" (UniqueName: \"kubernetes.io/projected/5da50f13-a25d-403c-8fda-39f93a5cf4fd-kube-api-access-ts4gg\") pod \"olm-operator-6b444d44fb-qv66j\" (UID: \"5da50f13-a25d-403c-8fda-39f93a5cf4fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv66j" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.184493 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8a3f832-b756-47ec-9729-7beacc669293-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dzzqk\" (UID: \"e8a3f832-b756-47ec-9729-7beacc669293\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dzzqk" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.184513 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07b30d68-832e-44c3-aa22-18c8f1cbb6e6-config\") pod \"kube-apiserver-operator-766d6c64bb-cwrn9\" (UID: \"07b30d68-832e-44c3-aa22-18c8f1cbb6e6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cwrn9" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.184529 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.184544 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5da50f13-a25d-403c-8fda-39f93a5cf4fd-srv-cert\") pod \"olm-operator-6b444d44fb-qv66j\" (UID: \"5da50f13-a25d-403c-8fda-39f93a5cf4fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv66j" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.184563 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtjqt\" (UniqueName: \"kubernetes.io/projected/e0e2a5e3-321b-4774-bf83-dd727fc954d2-kube-api-access-wtjqt\") pod \"downloads-7954f5f757-zcdgs\" (UID: \"e0e2a5e3-321b-4774-bf83-dd727fc954d2\") " 
pod="openshift-console/downloads-7954f5f757-zcdgs" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.184580 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1859af1a-8cea-4330-b44f-69c94692bfde-apiservice-cert\") pod \"packageserver-d55dfcdfc-m5hbt\" (UID: \"1859af1a-8cea-4330-b44f-69c94692bfde\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.184599 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwf66\" (UniqueName: \"kubernetes.io/projected/e6f31928-dd2b-41c9-8103-3652eb01b1ad-kube-api-access-vwf66\") pod \"kube-storage-version-migrator-operator-b67b599dd-bkmbq\" (UID: \"e6f31928-dd2b-41c9-8103-3652eb01b1ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bkmbq" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.184616 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/23977121-e0f0-4055-a727-c4050a20f2a6-proxy-tls\") pod \"machine-config-operator-74547568cd-dh8pj\" (UID: \"23977121-e0f0-4055-a727-c4050a20f2a6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dh8pj" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.184629 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/23977121-e0f0-4055-a727-c4050a20f2a6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dh8pj\" (UID: \"23977121-e0f0-4055-a727-c4050a20f2a6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dh8pj" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.184658 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/e6f31928-dd2b-41c9-8103-3652eb01b1ad-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bkmbq\" (UID: \"e6f31928-dd2b-41c9-8103-3652eb01b1ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bkmbq" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.184674 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71-stats-auth\") pod \"router-default-5444994796-6ndbx\" (UID: \"a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71\") " pod="openshift-ingress/router-default-5444994796-6ndbx" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.184872 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8a3f832-b756-47ec-9729-7beacc669293-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dzzqk\" (UID: \"e8a3f832-b756-47ec-9729-7beacc669293\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dzzqk" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.184891 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.184935 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.184953 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.185088 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtck7\" (UniqueName: \"kubernetes.io/projected/967c6d72-d998-4b42-8de2-b9fd1712fc12-kube-api-access-rtck7\") pod \"service-ca-9c57cc56f-2nj82\" (UID: \"967c6d72-d998-4b42-8de2-b9fd1712fc12\") " pod="openshift-service-ca/service-ca-9c57cc56f-2nj82" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.185678 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/23977121-e0f0-4055-a727-c4050a20f2a6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dh8pj\" (UID: \"23977121-e0f0-4055-a727-c4050a20f2a6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dh8pj" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.185885 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.185938 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.185983 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5da50f13-a25d-403c-8fda-39f93a5cf4fd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qv66j\" (UID: \"5da50f13-a25d-403c-8fda-39f93a5cf4fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv66j" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.186004 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.186446 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/967c6d72-d998-4b42-8de2-b9fd1712fc12-signing-key\") pod \"service-ca-9c57cc56f-2nj82\" (UID: \"967c6d72-d998-4b42-8de2-b9fd1712fc12\") " pod="openshift-service-ca/service-ca-9c57cc56f-2nj82" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.186481 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/019be0a2-be0d-43c7-a91d-280a3508c623-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2mfvb\" (UID: \"019be0a2-be0d-43c7-a91d-280a3508c623\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mfvb" Nov 29 06:36:25 crc kubenswrapper[4947]: 
I1129 06:36:25.186498 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsrw8\" (UniqueName: \"kubernetes.io/projected/019be0a2-be0d-43c7-a91d-280a3508c623-kube-api-access-rsrw8\") pod \"openshift-apiserver-operator-796bbdcf4f-2mfvb\" (UID: \"019be0a2-be0d-43c7-a91d-280a3508c623\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mfvb" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.186515 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/23977121-e0f0-4055-a727-c4050a20f2a6-images\") pod \"machine-config-operator-74547568cd-dh8pj\" (UID: \"23977121-e0f0-4055-a727-c4050a20f2a6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dh8pj" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.186532 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f2bc292-c394-4dd0-9ce5-51d960430aa4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5b7f7\" (UID: \"1f2bc292-c394-4dd0-9ce5-51d960430aa4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5b7f7" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.186549 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f2bc292-c394-4dd0-9ce5-51d960430aa4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5b7f7\" (UID: \"1f2bc292-c394-4dd0-9ce5-51d960430aa4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5b7f7" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.186601 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/967c6d72-d998-4b42-8de2-b9fd1712fc12-signing-cabundle\") pod \"service-ca-9c57cc56f-2nj82\" (UID: \"967c6d72-d998-4b42-8de2-b9fd1712fc12\") " pod="openshift-service-ca/service-ca-9c57cc56f-2nj82" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.186630 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqzqt\" (UniqueName: \"kubernetes.io/projected/d950d1cc-546a-4650-ab9c-e58388bda769-kube-api-access-sqzqt\") pod \"multus-admission-controller-857f4d67dd-2drts\" (UID: \"d950d1cc-546a-4650-ab9c-e58388bda769\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2drts" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.186647 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1859af1a-8cea-4330-b44f-69c94692bfde-webhook-cert\") pod \"packageserver-d55dfcdfc-m5hbt\" (UID: \"1859af1a-8cea-4330-b44f-69c94692bfde\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.186663 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcjlp\" (UniqueName: \"kubernetes.io/projected/1859af1a-8cea-4330-b44f-69c94692bfde-kube-api-access-pcjlp\") pod \"packageserver-d55dfcdfc-m5hbt\" (UID: \"1859af1a-8cea-4330-b44f-69c94692bfde\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.186680 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 
06:36:25.186843 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l2qs\" (UniqueName: \"kubernetes.io/projected/a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71-kube-api-access-8l2qs\") pod \"router-default-5444994796-6ndbx\" (UID: \"a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71\") " pod="openshift-ingress/router-default-5444994796-6ndbx" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.186885 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.186915 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d950d1cc-546a-4650-ab9c-e58388bda769-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2drts\" (UID: \"d950d1cc-546a-4650-ab9c-e58388bda769\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2drts" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.186932 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07b30d68-832e-44c3-aa22-18c8f1cbb6e6-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cwrn9\" (UID: \"07b30d68-832e-44c3-aa22-18c8f1cbb6e6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cwrn9" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.186949 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07b30d68-832e-44c3-aa22-18c8f1cbb6e6-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cwrn9\" (UID: 
\"07b30d68-832e-44c3-aa22-18c8f1cbb6e6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cwrn9" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.186971 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt7mh\" (UniqueName: \"kubernetes.io/projected/79190f34-e70f-4fa8-b8da-7db3b29678a0-kube-api-access-wt7mh\") pod \"machine-config-controller-84d6567774-lkf5s\" (UID: \"79190f34-e70f-4fa8-b8da-7db3b29678a0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lkf5s" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.186990 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.187007 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71-service-ca-bundle\") pod \"router-default-5444994796-6ndbx\" (UID: \"a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71\") " pod="openshift-ingress/router-default-5444994796-6ndbx" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.187035 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71-default-certificate\") pod \"router-default-5444994796-6ndbx\" (UID: \"a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71\") " pod="openshift-ingress/router-default-5444994796-6ndbx" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.187054 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/79190f34-e70f-4fa8-b8da-7db3b29678a0-proxy-tls\") pod \"machine-config-controller-84d6567774-lkf5s\" (UID: \"79190f34-e70f-4fa8-b8da-7db3b29678a0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lkf5s" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.187072 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71-metrics-certs\") pod \"router-default-5444994796-6ndbx\" (UID: \"a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71\") " pod="openshift-ingress/router-default-5444994796-6ndbx" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.187090 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwvbd\" (UniqueName: \"kubernetes.io/projected/51755494-2de8-480e-a1e5-fc10c9af3d06-kube-api-access-dwvbd\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.187121 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4bmr\" (UniqueName: \"kubernetes.io/projected/23977121-e0f0-4055-a727-c4050a20f2a6-kube-api-access-r4bmr\") pod \"machine-config-operator-74547568cd-dh8pj\" (UID: \"23977121-e0f0-4055-a727-c4050a20f2a6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dh8pj" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.187139 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 
crc kubenswrapper[4947]: I1129 06:36:25.187185 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/79190f34-e70f-4fa8-b8da-7db3b29678a0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lkf5s\" (UID: \"79190f34-e70f-4fa8-b8da-7db3b29678a0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lkf5s" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.187201 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/51755494-2de8-480e-a1e5-fc10c9af3d06-audit-dir\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.187248 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8a3f832-b756-47ec-9729-7beacc669293-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dzzqk\" (UID: \"e8a3f832-b756-47ec-9729-7beacc669293\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dzzqk" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.187264 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1859af1a-8cea-4330-b44f-69c94692bfde-tmpfs\") pod \"packageserver-d55dfcdfc-m5hbt\" (UID: \"1859af1a-8cea-4330-b44f-69c94692bfde\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.187282 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-audit-policies\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: 
\"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.187301 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt594\" (UniqueName: \"kubernetes.io/projected/1f2bc292-c394-4dd0-9ce5-51d960430aa4-kube-api-access-dt594\") pod \"openshift-controller-manager-operator-756b6f6bc6-5b7f7\" (UID: \"1f2bc292-c394-4dd0-9ce5-51d960430aa4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5b7f7" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.187595 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/019be0a2-be0d-43c7-a91d-280a3508c623-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2mfvb\" (UID: \"019be0a2-be0d-43c7-a91d-280a3508c623\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mfvb" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.187919 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/51755494-2de8-480e-a1e5-fc10c9af3d06-audit-dir\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.188255 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.188298 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/79190f34-e70f-4fa8-b8da-7db3b29678a0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lkf5s\" (UID: \"79190f34-e70f-4fa8-b8da-7db3b29678a0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lkf5s" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.188515 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1859af1a-8cea-4330-b44f-69c94692bfde-tmpfs\") pod \"packageserver-d55dfcdfc-m5hbt\" (UID: \"1859af1a-8cea-4330-b44f-69c94692bfde\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.188795 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-audit-policies\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.189176 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.189544 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/019be0a2-be0d-43c7-a91d-280a3508c623-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2mfvb\" (UID: \"019be0a2-be0d-43c7-a91d-280a3508c623\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mfvb" Nov 29 06:36:25 crc 
kubenswrapper[4947]: I1129 06:36:25.189581 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.189596 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.190126 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.190678 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.190965 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-router-certs\") 
pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.191395 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.191564 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.204043 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.214926 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.221047 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.235266 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.254871 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 29 06:36:25 crc 
kubenswrapper[4947]: I1129 06:36:25.275655 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.288289 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/23977121-e0f0-4055-a727-c4050a20f2a6-proxy-tls\") pod \"machine-config-operator-74547568cd-dh8pj\" (UID: \"23977121-e0f0-4055-a727-c4050a20f2a6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dh8pj" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.294850 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.298028 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/23977121-e0f0-4055-a727-c4050a20f2a6-images\") pod \"machine-config-operator-74547568cd-dh8pj\" (UID: \"23977121-e0f0-4055-a727-c4050a20f2a6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dh8pj" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.315491 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.334563 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.355169 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.375972 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.395549 4947 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.415932 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.435632 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.441418 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71-default-certificate\") pod \"router-default-5444994796-6ndbx\" (UID: \"a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71\") " pod="openshift-ingress/router-default-5444994796-6ndbx" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.455307 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.468495 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71-stats-auth\") pod \"router-default-5444994796-6ndbx\" (UID: \"a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71\") " pod="openshift-ingress/router-default-5444994796-6ndbx" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.474782 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.481928 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71-metrics-certs\") pod \"router-default-5444994796-6ndbx\" (UID: \"a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71\") " pod="openshift-ingress/router-default-5444994796-6ndbx" Nov 29 06:36:25 crc 
kubenswrapper[4947]: I1129 06:36:25.495932 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.497919 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71-service-ca-bundle\") pod \"router-default-5444994796-6ndbx\" (UID: \"a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71\") " pod="openshift-ingress/router-default-5444994796-6ndbx" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.514927 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.535649 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.555402 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.574736 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.595054 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.616132 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.635722 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.655604 4947 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.676166 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.690086 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f31928-dd2b-41c9-8103-3652eb01b1ad-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bkmbq\" (UID: \"e6f31928-dd2b-41c9-8103-3652eb01b1ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bkmbq" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.696142 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.706442 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6f31928-dd2b-41c9-8103-3652eb01b1ad-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bkmbq\" (UID: \"e6f31928-dd2b-41c9-8103-3652eb01b1ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bkmbq" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.715682 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.736081 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.749260 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/79190f34-e70f-4fa8-b8da-7db3b29678a0-proxy-tls\") pod \"machine-config-controller-84d6567774-lkf5s\" (UID: \"79190f34-e70f-4fa8-b8da-7db3b29678a0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lkf5s" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.756196 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.776342 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.796297 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.816491 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.836335 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.856414 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.876056 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.894980 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.903908 4947 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8a3f832-b756-47ec-9729-7beacc669293-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dzzqk\" (UID: \"e8a3f832-b756-47ec-9729-7beacc669293\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dzzqk" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.915543 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.926085 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8a3f832-b756-47ec-9729-7beacc669293-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dzzqk\" (UID: \"e8a3f832-b756-47ec-9729-7beacc669293\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dzzqk" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.936620 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.955867 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.975842 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.981929 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f2bc292-c394-4dd0-9ce5-51d960430aa4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5b7f7\" (UID: \"1f2bc292-c394-4dd0-9ce5-51d960430aa4\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5b7f7" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.994354 4947 request.go:700] Waited for 1.000863111s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&limit=500&resourceVersion=0 Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.996504 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 29 06:36:25 crc kubenswrapper[4947]: I1129 06:36:25.998214 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f2bc292-c394-4dd0-9ce5-51d960430aa4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5b7f7\" (UID: \"1f2bc292-c394-4dd0-9ce5-51d960430aa4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5b7f7" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.016948 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.036874 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.051564 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07b30d68-832e-44c3-aa22-18c8f1cbb6e6-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cwrn9\" (UID: \"07b30d68-832e-44c3-aa22-18c8f1cbb6e6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cwrn9" Nov 29 06:36:26 crc kubenswrapper[4947]: 
I1129 06:36:26.052036 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07b30d68-832e-44c3-aa22-18c8f1cbb6e6-config\") pod \"kube-apiserver-operator-766d6c64bb-cwrn9\" (UID: \"07b30d68-832e-44c3-aa22-18c8f1cbb6e6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cwrn9" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.056358 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.075594 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.095955 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.115806 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.131077 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5da50f13-a25d-403c-8fda-39f93a5cf4fd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qv66j\" (UID: \"5da50f13-a25d-403c-8fda-39f93a5cf4fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv66j" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.135533 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.148694 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/5da50f13-a25d-403c-8fda-39f93a5cf4fd-srv-cert\") pod \"olm-operator-6b444d44fb-qv66j\" (UID: \"5da50f13-a25d-403c-8fda-39f93a5cf4fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv66j" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.156440 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.159879 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1859af1a-8cea-4330-b44f-69c94692bfde-webhook-cert\") pod \"packageserver-d55dfcdfc-m5hbt\" (UID: \"1859af1a-8cea-4330-b44f-69c94692bfde\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.169199 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1859af1a-8cea-4330-b44f-69c94692bfde-apiservice-cert\") pod \"packageserver-d55dfcdfc-m5hbt\" (UID: \"1859af1a-8cea-4330-b44f-69c94692bfde\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.176436 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.182696 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d950d1cc-546a-4650-ab9c-e58388bda769-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2drts\" (UID: \"d950d1cc-546a-4650-ab9c-e58388bda769\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2drts" Nov 29 06:36:26 crc kubenswrapper[4947]: E1129 06:36:26.187066 4947 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed 
to sync configmap cache: timed out waiting for the condition Nov 29 06:36:26 crc kubenswrapper[4947]: E1129 06:36:26.187075 4947 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Nov 29 06:36:26 crc kubenswrapper[4947]: E1129 06:36:26.187152 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/967c6d72-d998-4b42-8de2-b9fd1712fc12-signing-cabundle podName:967c6d72-d998-4b42-8de2-b9fd1712fc12 nodeName:}" failed. No retries permitted until 2025-11-29 06:36:26.687132027 +0000 UTC m=+137.731514178 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/967c6d72-d998-4b42-8de2-b9fd1712fc12-signing-cabundle") pod "service-ca-9c57cc56f-2nj82" (UID: "967c6d72-d998-4b42-8de2-b9fd1712fc12") : failed to sync configmap cache: timed out waiting for the condition Nov 29 06:36:26 crc kubenswrapper[4947]: E1129 06:36:26.187176 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/967c6d72-d998-4b42-8de2-b9fd1712fc12-signing-key podName:967c6d72-d998-4b42-8de2-b9fd1712fc12 nodeName:}" failed. No retries permitted until 2025-11-29 06:36:26.687164848 +0000 UTC m=+137.731547039 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/967c6d72-d998-4b42-8de2-b9fd1712fc12-signing-key") pod "service-ca-9c57cc56f-2nj82" (UID: "967c6d72-d998-4b42-8de2-b9fd1712fc12") : failed to sync secret cache: timed out waiting for the condition Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.194913 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.215605 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.255473 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.275841 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.295753 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.315577 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.336722 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.356113 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.375427 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 
06:36:26.395944 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.416439 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.435760 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.468853 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.476343 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.496564 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.515336 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.536017 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.556922 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.575875 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.595322 4947 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.616485 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.635488 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.654798 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.675018 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.711854 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/967c6d72-d998-4b42-8de2-b9fd1712fc12-signing-key\") pod \"service-ca-9c57cc56f-2nj82\" (UID: \"967c6d72-d998-4b42-8de2-b9fd1712fc12\") " pod="openshift-service-ca/service-ca-9c57cc56f-2nj82" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.711930 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/967c6d72-d998-4b42-8de2-b9fd1712fc12-signing-cabundle\") pod \"service-ca-9c57cc56f-2nj82\" (UID: \"967c6d72-d998-4b42-8de2-b9fd1712fc12\") " pod="openshift-service-ca/service-ca-9c57cc56f-2nj82" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.713876 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/967c6d72-d998-4b42-8de2-b9fd1712fc12-signing-cabundle\") pod \"service-ca-9c57cc56f-2nj82\" (UID: \"967c6d72-d998-4b42-8de2-b9fd1712fc12\") " pod="openshift-service-ca/service-ca-9c57cc56f-2nj82" Nov 29 06:36:26 crc 
kubenswrapper[4947]: I1129 06:36:26.716168 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/967c6d72-d998-4b42-8de2-b9fd1712fc12-signing-key\") pod \"service-ca-9c57cc56f-2nj82\" (UID: \"967c6d72-d998-4b42-8de2-b9fd1712fc12\") " pod="openshift-service-ca/service-ca-9c57cc56f-2nj82" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.718362 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4q46\" (UniqueName: \"kubernetes.io/projected/e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a-kube-api-access-q4q46\") pod \"machine-approver-56656f9798-sghgg\" (UID: \"e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sghgg" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.735908 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8s77\" (UniqueName: \"kubernetes.io/projected/a50236c5-0779-4d32-968b-2d4aee931dd6-kube-api-access-x8s77\") pod \"etcd-operator-b45778765-gkrmk\" (UID: \"a50236c5-0779-4d32-968b-2d4aee931dd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.757647 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5h8f\" (UniqueName: \"kubernetes.io/projected/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-kube-api-access-v5h8f\") pod \"controller-manager-879f6c89f-2vgqt\" (UID: \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.775570 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpb2s\" (UniqueName: \"kubernetes.io/projected/20404d7a-857c-4f60-beef-e6ef9116804d-kube-api-access-hpb2s\") pod \"route-controller-manager-6576b87f9c-z8t9v\" (UID: \"20404d7a-857c-4f60-beef-e6ef9116804d\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.796688 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqcks\" (UniqueName: \"kubernetes.io/projected/13ecfe15-43e4-42ff-817f-fc95fb8f54aa-kube-api-access-nqcks\") pod \"migrator-59844c95c7-r88sq\" (UID: \"13ecfe15-43e4-42ff-817f-fc95fb8f54aa\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r88sq" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.824125 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95k4w\" (UniqueName: \"kubernetes.io/projected/2ce0e4b6-b001-41b7-a850-a77a9b7131d9-kube-api-access-95k4w\") pod \"cluster-samples-operator-665b6dd947-b6rpb\" (UID: \"2ce0e4b6-b001-41b7-a850-a77a9b7131d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b6rpb" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.832579 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6m4f\" (UniqueName: \"kubernetes.io/projected/061238bd-c978-4bf9-9868-5ef174d414f2-kube-api-access-f6m4f\") pod \"cluster-image-registry-operator-dc59b4c8b-hm525\" (UID: \"061238bd-c978-4bf9-9868-5ef174d414f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hm525" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.833915 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r88sq" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.840672 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sghgg" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.851986 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a689b7f0-2ae8-4200-9e32-0ed56e5791d1-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9rskb\" (UID: \"a689b7f0-2ae8-4200-9e32-0ed56e5791d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9rskb" Nov 29 06:36:26 crc kubenswrapper[4947]: W1129 06:36:26.855048 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6c322b6_b29e_4177_9e8c_7fefbf9d7e4a.slice/crio-92e1554e54113ece793b799f0d47256df3c1dc1e54de0ba6c05d06ec60218562 WatchSource:0}: Error finding container 92e1554e54113ece793b799f0d47256df3c1dc1e54de0ba6c05d06ec60218562: Status 404 returned error can't find the container with id 92e1554e54113ece793b799f0d47256df3c1dc1e54de0ba6c05d06ec60218562 Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.856190 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.856344 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.875800 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.896074 4947 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.911567 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9rskb" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.938153 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/061238bd-c978-4bf9-9868-5ef174d414f2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hm525\" (UID: \"061238bd-c978-4bf9-9868-5ef174d414f2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hm525" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.954884 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jl4h\" (UniqueName: \"kubernetes.io/projected/d0bccdee-ee49-4d76-9826-0e8ece077528-kube-api-access-7jl4h\") pod \"machine-api-operator-5694c8668f-flwtp\" (UID: \"d0bccdee-ee49-4d76-9826-0e8ece077528\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flwtp" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.978476 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rmgg\" (UniqueName: \"kubernetes.io/projected/96b6a564-8c15-4680-a2e2-c8bf9289fdf8-kube-api-access-4rmgg\") pod \"console-operator-58897d9998-mcs9v\" (UID: \"96b6a564-8c15-4680-a2e2-c8bf9289fdf8\") " pod="openshift-console-operator/console-operator-58897d9998-mcs9v" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.979930 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mcs9v" Nov 29 06:36:26 crc kubenswrapper[4947]: I1129 06:36:26.997330 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79bq6\" (UniqueName: \"kubernetes.io/projected/b85a2376-eba6-4a1e-b6eb-870ffc696f31-kube-api-access-79bq6\") pod \"apiserver-76f77b778f-lx74d\" (UID: \"b85a2376-eba6-4a1e-b6eb-870ffc696f31\") " pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.008736 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcctf\" (UniqueName: \"kubernetes.io/projected/4cfb4573-1a3c-43b5-aa58-83774b1b9212-kube-api-access-hcctf\") pod \"authentication-operator-69f744f599-g227d\" (UID: \"4cfb4573-1a3c-43b5-aa58-83774b1b9212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g227d" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.014685 4947 request.go:700] Waited for 1.930996567s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.027035 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.029712 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bccmc\" (UniqueName: \"kubernetes.io/projected/f060d79a-f223-455c-b203-0bd9e430a896-kube-api-access-bccmc\") pod \"dns-operator-744455d44c-tcchr\" (UID: \"f060d79a-f223-455c-b203-0bd9e430a896\") " pod="openshift-dns-operator/dns-operator-744455d44c-tcchr" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.035303 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.043756 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.055400 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.062184 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gkrmk"] Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.075539 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.078843 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r88sq"] Nov 29 06:36:27 crc kubenswrapper[4947]: W1129 06:36:27.079767 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda50236c5_0779_4d32_968b_2d4aee931dd6.slice/crio-a78efae40697486cf2df27a16977329694d4b2f2902d061a494f2cbd91609042 
WatchSource:0}: Error finding container a78efae40697486cf2df27a16977329694d4b2f2902d061a494f2cbd91609042: Status 404 returned error can't find the container with id a78efae40697486cf2df27a16977329694d4b2f2902d061a494f2cbd91609042 Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.087840 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hm525" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.096207 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9rskb"] Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.096257 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.100484 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b6rpb" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.149503 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-flwtp" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.150747 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-g227d" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.162182 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.162543 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.162684 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.170350 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-tcchr" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.216657 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts4gg\" (UniqueName: \"kubernetes.io/projected/5da50f13-a25d-403c-8fda-39f93a5cf4fd-kube-api-access-ts4gg\") pod \"olm-operator-6b444d44fb-qv66j\" (UID: \"5da50f13-a25d-403c-8fda-39f93a5cf4fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv66j" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.236348 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtjqt\" (UniqueName: \"kubernetes.io/projected/e0e2a5e3-321b-4774-bf83-dd727fc954d2-kube-api-access-wtjqt\") pod \"downloads-7954f5f757-zcdgs\" (UID: \"e0e2a5e3-321b-4774-bf83-dd727fc954d2\") " pod="openshift-console/downloads-7954f5f757-zcdgs" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.242357 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-zcdgs" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.252948 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwf66\" (UniqueName: \"kubernetes.io/projected/e6f31928-dd2b-41c9-8103-3652eb01b1ad-kube-api-access-vwf66\") pod \"kube-storage-version-migrator-operator-b67b599dd-bkmbq\" (UID: \"e6f31928-dd2b-41c9-8103-3652eb01b1ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bkmbq" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.260123 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mcs9v"] Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.260339 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.270948 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8a3f832-b756-47ec-9729-7beacc669293-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dzzqk\" (UID: \"e8a3f832-b756-47ec-9729-7beacc669293\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dzzqk" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.281663 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bkmbq" Nov 29 06:36:27 crc kubenswrapper[4947]: W1129 06:36:27.283045 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96b6a564_8c15_4680_a2e2_c8bf9289fdf8.slice/crio-0065b4ce0a66bbe900ee78c4aafe64a2bc0e535adb2b4d27a041144f83441ec9 WatchSource:0}: Error finding container 0065b4ce0a66bbe900ee78c4aafe64a2bc0e535adb2b4d27a041144f83441ec9: Status 404 returned error can't find the container with id 0065b4ce0a66bbe900ee78c4aafe64a2bc0e535adb2b4d27a041144f83441ec9 Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.293776 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtck7\" (UniqueName: \"kubernetes.io/projected/967c6d72-d998-4b42-8de2-b9fd1712fc12-kube-api-access-rtck7\") pod \"service-ca-9c57cc56f-2nj82\" (UID: \"967c6d72-d998-4b42-8de2-b9fd1712fc12\") " pod="openshift-service-ca/service-ca-9c57cc56f-2nj82" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.304979 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v"] Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.313944 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsrw8\" (UniqueName: \"kubernetes.io/projected/019be0a2-be0d-43c7-a91d-280a3508c623-kube-api-access-rsrw8\") pod \"openshift-apiserver-operator-796bbdcf4f-2mfvb\" (UID: \"019be0a2-be0d-43c7-a91d-280a3508c623\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mfvb" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.331711 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dzzqk" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.335081 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2vgqt"] Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.340025 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcjlp\" (UniqueName: \"kubernetes.io/projected/1859af1a-8cea-4330-b44f-69c94692bfde-kube-api-access-pcjlp\") pod \"packageserver-d55dfcdfc-m5hbt\" (UID: \"1859af1a-8cea-4330-b44f-69c94692bfde\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.350383 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqzqt\" (UniqueName: \"kubernetes.io/projected/d950d1cc-546a-4650-ab9c-e58388bda769-kube-api-access-sqzqt\") pod \"multus-admission-controller-857f4d67dd-2drts\" (UID: \"d950d1cc-546a-4650-ab9c-e58388bda769\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2drts" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.359913 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv66j" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.370959 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.372378 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hm525"] Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.372573 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-2drts" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.373740 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt7mh\" (UniqueName: \"kubernetes.io/projected/79190f34-e70f-4fa8-b8da-7db3b29678a0-kube-api-access-wt7mh\") pod \"machine-config-controller-84d6567774-lkf5s\" (UID: \"79190f34-e70f-4fa8-b8da-7db3b29678a0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lkf5s" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.380551 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2nj82" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.391239 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt594\" (UniqueName: \"kubernetes.io/projected/1f2bc292-c394-4dd0-9ce5-51d960430aa4-kube-api-access-dt594\") pod \"openshift-controller-manager-operator-756b6f6bc6-5b7f7\" (UID: \"1f2bc292-c394-4dd0-9ce5-51d960430aa4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5b7f7" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.416104 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l2qs\" (UniqueName: \"kubernetes.io/projected/a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71-kube-api-access-8l2qs\") pod \"router-default-5444994796-6ndbx\" (UID: \"a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71\") " pod="openshift-ingress/router-default-5444994796-6ndbx" Nov 29 06:36:27 crc kubenswrapper[4947]: W1129 06:36:27.421519 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod061238bd_c978_4bf9_9868_5ef174d414f2.slice/crio-780f45b28b3ac82ea630632b0d67e44ab5660a2dd6a1a849c4a62317f08b9163 WatchSource:0}: Error finding container 
780f45b28b3ac82ea630632b0d67e44ab5660a2dd6a1a849c4a62317f08b9163: Status 404 returned error can't find the container with id 780f45b28b3ac82ea630632b0d67e44ab5660a2dd6a1a849c4a62317f08b9163 Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.448205 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07b30d68-832e-44c3-aa22-18c8f1cbb6e6-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cwrn9\" (UID: \"07b30d68-832e-44c3-aa22-18c8f1cbb6e6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cwrn9" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.453710 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwvbd\" (UniqueName: \"kubernetes.io/projected/51755494-2de8-480e-a1e5-fc10c9af3d06-kube-api-access-dwvbd\") pod \"oauth-openshift-558db77b4-72fb2\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.459538 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b6rpb"] Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.476269 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4bmr\" (UniqueName: \"kubernetes.io/projected/23977121-e0f0-4055-a727-c4050a20f2a6-kube-api-access-r4bmr\") pod \"machine-config-operator-74547568cd-dh8pj\" (UID: \"23977121-e0f0-4055-a727-c4050a20f2a6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dh8pj" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.513872 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.535698 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mfvb" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.554159 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f903f69c-2db9-478a-9141-22f6aeb27ce3-encryption-config\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.554234 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f903f69c-2db9-478a-9141-22f6aeb27ce3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.554274 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-console-config\") pod \"console-f9d7485db-nswtf\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.554308 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-trusted-ca-bundle\") pod \"console-f9d7485db-nswtf\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.554343 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/819051f4-236d-42d3-b3cf-c82103136dce-trusted-ca\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.554380 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.554401 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ec9073bd-31ec-4b35-93a9-08d26b62c60d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-b45sk\" (UID: \"ec9073bd-31ec-4b35-93a9-08d26b62c60d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b45sk" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.554424 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f903f69c-2db9-478a-9141-22f6aeb27ce3-audit-dir\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.554447 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec9073bd-31ec-4b35-93a9-08d26b62c60d-serving-cert\") pod \"openshift-config-operator-7777fb866f-b45sk\" (UID: \"ec9073bd-31ec-4b35-93a9-08d26b62c60d\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-b45sk" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.554467 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/711e27d0-dd37-4f6f-adae-5c04bb856f47-console-serving-cert\") pod \"console-f9d7485db-nswtf\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.554498 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/819051f4-236d-42d3-b3cf-c82103136dce-bound-sa-token\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.554517 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0-metrics-tls\") pod \"ingress-operator-5b745b69d9-xxz9x\" (UID: \"1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxz9x" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.554541 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr29b\" (UniqueName: \"kubernetes.io/projected/ec9073bd-31ec-4b35-93a9-08d26b62c60d-kube-api-access-dr29b\") pod \"openshift-config-operator-7777fb866f-b45sk\" (UID: \"ec9073bd-31ec-4b35-93a9-08d26b62c60d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b45sk" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.554573 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" 
(UniqueName: \"kubernetes.io/projected/819051f4-236d-42d3-b3cf-c82103136dce-registry-tls\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.554595 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j95cb\" (UniqueName: \"kubernetes.io/projected/819051f4-236d-42d3-b3cf-c82103136dce-kube-api-access-j95cb\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.554613 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f903f69c-2db9-478a-9141-22f6aeb27ce3-etcd-client\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.554633 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-oauth-serving-cert\") pod \"console-f9d7485db-nswtf\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.554665 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/711e27d0-dd37-4f6f-adae-5c04bb856f47-console-oauth-config\") pod \"console-f9d7485db-nswtf\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.554690 4947 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xxz9x\" (UID: \"1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxz9x" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.554713 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqwlb\" (UniqueName: \"kubernetes.io/projected/59495e16-0371-48e0-b517-0adb7ac8eb4f-kube-api-access-pqwlb\") pod \"package-server-manager-789f6589d5-mzd69\" (UID: \"59495e16-0371-48e0-b517-0adb7ac8eb4f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzd69" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.554738 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/59495e16-0371-48e0-b517-0adb7ac8eb4f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mzd69\" (UID: \"59495e16-0371-48e0-b517-0adb7ac8eb4f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzd69" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.558251 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4nrx\" (UniqueName: \"kubernetes.io/projected/1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0-kube-api-access-h4nrx\") pod \"ingress-operator-5b745b69d9-xxz9x\" (UID: \"1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxz9x" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.558291 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2fqm\" (UniqueName: 
\"kubernetes.io/projected/711e27d0-dd37-4f6f-adae-5c04bb856f47-kube-api-access-c2fqm\") pod \"console-f9d7485db-nswtf\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.558321 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/819051f4-236d-42d3-b3cf-c82103136dce-ca-trust-extracted\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.558345 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/819051f4-236d-42d3-b3cf-c82103136dce-installation-pull-secrets\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.558367 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx5ng\" (UniqueName: \"kubernetes.io/projected/f903f69c-2db9-478a-9141-22f6aeb27ce3-kube-api-access-cx5ng\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.558438 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0-trusted-ca\") pod \"ingress-operator-5b745b69d9-xxz9x\" (UID: \"1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxz9x" Nov 29 06:36:27 crc 
kubenswrapper[4947]: I1129 06:36:27.558568 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dh8pj" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.559068 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-service-ca\") pod \"console-f9d7485db-nswtf\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:27 crc kubenswrapper[4947]: E1129 06:36:27.559072 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:28.059058282 +0000 UTC m=+139.103440363 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.559136 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f903f69c-2db9-478a-9141-22f6aeb27ce3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.559293 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/f903f69c-2db9-478a-9141-22f6aeb27ce3-serving-cert\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.559442 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/819051f4-236d-42d3-b3cf-c82103136dce-registry-certificates\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.559503 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f903f69c-2db9-478a-9141-22f6aeb27ce3-audit-policies\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.572094 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-6ndbx" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.619183 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lkf5s" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.619895 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cwrn9" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.638406 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5b7f7" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.660652 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.660838 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxr7h\" (UniqueName: \"kubernetes.io/projected/f05a65e3-0462-4718-a98a-864597cfb0e7-kube-api-access-cxr7h\") pod \"machine-config-server-l7pgj\" (UID: \"f05a65e3-0462-4718-a98a-864597cfb0e7\") " pod="openshift-machine-config-operator/machine-config-server-l7pgj" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.660883 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/711e27d0-dd37-4f6f-adae-5c04bb856f47-console-oauth-config\") pod \"console-f9d7485db-nswtf\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.660913 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xxz9x\" (UID: \"1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxz9x" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.660936 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfljr\" (UniqueName: 
\"kubernetes.io/projected/0eed2dde-d158-43e1-8df2-f5a309ef3da3-kube-api-access-wfljr\") pod \"dns-default-td4m7\" (UID: \"0eed2dde-d158-43e1-8df2-f5a309ef3da3\") " pod="openshift-dns/dns-default-td4m7" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.660957 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f05a65e3-0462-4718-a98a-864597cfb0e7-node-bootstrap-token\") pod \"machine-config-server-l7pgj\" (UID: \"f05a65e3-0462-4718-a98a-864597cfb0e7\") " pod="openshift-machine-config-operator/machine-config-server-l7pgj" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.660978 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7fc6440e-f991-4421-b078-9496ffdfb74d-csi-data-dir\") pod \"csi-hostpathplugin-p7wlh\" (UID: \"7fc6440e-f991-4421-b078-9496ffdfb74d\") " pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661006 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7fc6440e-f991-4421-b078-9496ffdfb74d-socket-dir\") pod \"csi-hostpathplugin-p7wlh\" (UID: \"7fc6440e-f991-4421-b078-9496ffdfb74d\") " pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661051 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqwlb\" (UniqueName: \"kubernetes.io/projected/59495e16-0371-48e0-b517-0adb7ac8eb4f-kube-api-access-pqwlb\") pod \"package-server-manager-789f6589d5-mzd69\" (UID: \"59495e16-0371-48e0-b517-0adb7ac8eb4f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzd69" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661074 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eed2dde-d158-43e1-8df2-f5a309ef3da3-config-volume\") pod \"dns-default-td4m7\" (UID: \"0eed2dde-d158-43e1-8df2-f5a309ef3da3\") " pod="openshift-dns/dns-default-td4m7" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661163 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9jcs\" (UniqueName: \"kubernetes.io/projected/338a8f3b-127a-44a4-af55-0d7f034bbf17-kube-api-access-k9jcs\") pod \"service-ca-operator-777779d784-7jmxk\" (UID: \"338a8f3b-127a-44a4-af55-0d7f034bbf17\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jmxk" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661198 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/59495e16-0371-48e0-b517-0adb7ac8eb4f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mzd69\" (UID: \"59495e16-0371-48e0-b517-0adb7ac8eb4f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzd69" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661338 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4a5aec17-3235-4678-afa1-08da4b223f45-srv-cert\") pod \"catalog-operator-68c6474976-xjxbf\" (UID: \"4a5aec17-3235-4678-afa1-08da4b223f45\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xjxbf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661360 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7fc6440e-f991-4421-b078-9496ffdfb74d-registration-dir\") pod \"csi-hostpathplugin-p7wlh\" (UID: 
\"7fc6440e-f991-4421-b078-9496ffdfb74d\") " pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661410 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4nrx\" (UniqueName: \"kubernetes.io/projected/1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0-kube-api-access-h4nrx\") pod \"ingress-operator-5b745b69d9-xxz9x\" (UID: \"1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxz9x" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661429 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/338a8f3b-127a-44a4-af55-0d7f034bbf17-serving-cert\") pod \"service-ca-operator-777779d784-7jmxk\" (UID: \"338a8f3b-127a-44a4-af55-0d7f034bbf17\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jmxk" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661445 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9qc6\" (UniqueName: \"kubernetes.io/projected/26b3766f-e08f-47e8-803e-d138e6e7620f-kube-api-access-n9qc6\") pod \"ingress-canary-4cgzv\" (UID: \"26b3766f-e08f-47e8-803e-d138e6e7620f\") " pod="openshift-ingress-canary/ingress-canary-4cgzv" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661476 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2fqm\" (UniqueName: \"kubernetes.io/projected/711e27d0-dd37-4f6f-adae-5c04bb856f47-kube-api-access-c2fqm\") pod \"console-f9d7485db-nswtf\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661505 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/675de8ae-169a-4737-a290-54cdb32d8cb0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6mhws\" (UID: \"675de8ae-169a-4737-a290-54cdb32d8cb0\") " pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661539 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/819051f4-236d-42d3-b3cf-c82103136dce-ca-trust-extracted\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661554 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/819051f4-236d-42d3-b3cf-c82103136dce-installation-pull-secrets\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661570 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx5ng\" (UniqueName: \"kubernetes.io/projected/f903f69c-2db9-478a-9141-22f6aeb27ce3-kube-api-access-cx5ng\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661592 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7fc6440e-f991-4421-b078-9496ffdfb74d-mountpoint-dir\") pod \"csi-hostpathplugin-p7wlh\" (UID: \"7fc6440e-f991-4421-b078-9496ffdfb74d\") " pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661630 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0-trusted-ca\") pod \"ingress-operator-5b745b69d9-xxz9x\" (UID: \"1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxz9x" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661647 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cd078d1-1fb6-4997-a55e-f90cfea7bf7a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-66xmw\" (UID: \"4cd078d1-1fb6-4997-a55e-f90cfea7bf7a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-66xmw" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661690 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-service-ca\") pod \"console-f9d7485db-nswtf\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661706 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f903f69c-2db9-478a-9141-22f6aeb27ce3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661721 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/675de8ae-169a-4737-a290-54cdb32d8cb0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6mhws\" (UID: 
\"675de8ae-169a-4737-a290-54cdb32d8cb0\") " pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661752 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f903f69c-2db9-478a-9141-22f6aeb27ce3-serving-cert\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661804 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f161f69-0220-4b3f-9f46-76277cd105f9-config-volume\") pod \"collect-profiles-29406630-wnfkh\" (UID: \"3f161f69-0220-4b3f-9f46-76277cd105f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661838 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/819051f4-236d-42d3-b3cf-c82103136dce-registry-certificates\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661855 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f903f69c-2db9-478a-9141-22f6aeb27ce3-audit-policies\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661882 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmdhm\" (UniqueName: 
\"kubernetes.io/projected/675de8ae-169a-4737-a290-54cdb32d8cb0-kube-api-access-wmdhm\") pod \"marketplace-operator-79b997595-6mhws\" (UID: \"675de8ae-169a-4737-a290-54cdb32d8cb0\") " pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661898 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7fc6440e-f991-4421-b078-9496ffdfb74d-plugins-dir\") pod \"csi-hostpathplugin-p7wlh\" (UID: \"7fc6440e-f991-4421-b078-9496ffdfb74d\") " pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.661982 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f903f69c-2db9-478a-9141-22f6aeb27ce3-encryption-config\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.662007 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q78tr\" (UniqueName: \"kubernetes.io/projected/4a5aec17-3235-4678-afa1-08da4b223f45-kube-api-access-q78tr\") pod \"catalog-operator-68c6474976-xjxbf\" (UID: \"4a5aec17-3235-4678-afa1-08da4b223f45\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xjxbf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.662023 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0eed2dde-d158-43e1-8df2-f5a309ef3da3-metrics-tls\") pod \"dns-default-td4m7\" (UID: \"0eed2dde-d158-43e1-8df2-f5a309ef3da3\") " pod="openshift-dns/dns-default-td4m7" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.662510 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9hwv\" (UniqueName: \"kubernetes.io/projected/4cd078d1-1fb6-4997-a55e-f90cfea7bf7a-kube-api-access-n9hwv\") pod \"control-plane-machine-set-operator-78cbb6b69f-66xmw\" (UID: \"4cd078d1-1fb6-4997-a55e-f90cfea7bf7a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-66xmw" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.662559 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f903f69c-2db9-478a-9141-22f6aeb27ce3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.662602 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r9pv\" (UniqueName: \"kubernetes.io/projected/7fc6440e-f991-4421-b078-9496ffdfb74d-kube-api-access-5r9pv\") pod \"csi-hostpathplugin-p7wlh\" (UID: \"7fc6440e-f991-4421-b078-9496ffdfb74d\") " pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.662624 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-console-config\") pod \"console-f9d7485db-nswtf\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.662662 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-trusted-ca-bundle\") pod \"console-f9d7485db-nswtf\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " pod="openshift-console/console-f9d7485db-nswtf" 
Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.662705 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f161f69-0220-4b3f-9f46-76277cd105f9-secret-volume\") pod \"collect-profiles-29406630-wnfkh\" (UID: \"3f161f69-0220-4b3f-9f46-76277cd105f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.662729 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l2xv\" (UniqueName: \"kubernetes.io/projected/3f161f69-0220-4b3f-9f46-76277cd105f9-kube-api-access-8l2xv\") pod \"collect-profiles-29406630-wnfkh\" (UID: \"3f161f69-0220-4b3f-9f46-76277cd105f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.662744 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f05a65e3-0462-4718-a98a-864597cfb0e7-certs\") pod \"machine-config-server-l7pgj\" (UID: \"f05a65e3-0462-4718-a98a-864597cfb0e7\") " pod="openshift-machine-config-operator/machine-config-server-l7pgj" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.662770 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26b3766f-e08f-47e8-803e-d138e6e7620f-cert\") pod \"ingress-canary-4cgzv\" (UID: \"26b3766f-e08f-47e8-803e-d138e6e7620f\") " pod="openshift-ingress-canary/ingress-canary-4cgzv" Nov 29 06:36:27 crc kubenswrapper[4947]: E1129 06:36:27.662796 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 06:36:28.162772629 +0000 UTC m=+139.207154820 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.662863 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/819051f4-236d-42d3-b3cf-c82103136dce-trusted-ca\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.662924 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.662944 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ec9073bd-31ec-4b35-93a9-08d26b62c60d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-b45sk\" (UID: \"ec9073bd-31ec-4b35-93a9-08d26b62c60d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b45sk" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.662966 4947 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f903f69c-2db9-478a-9141-22f6aeb27ce3-audit-dir\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.663052 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec9073bd-31ec-4b35-93a9-08d26b62c60d-serving-cert\") pod \"openshift-config-operator-7777fb866f-b45sk\" (UID: \"ec9073bd-31ec-4b35-93a9-08d26b62c60d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b45sk" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.663081 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/338a8f3b-127a-44a4-af55-0d7f034bbf17-config\") pod \"service-ca-operator-777779d784-7jmxk\" (UID: \"338a8f3b-127a-44a4-af55-0d7f034bbf17\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jmxk" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.663130 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/819051f4-236d-42d3-b3cf-c82103136dce-bound-sa-token\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.663145 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/711e27d0-dd37-4f6f-adae-5c04bb856f47-console-serving-cert\") pod \"console-f9d7485db-nswtf\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.663170 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0-metrics-tls\") pod \"ingress-operator-5b745b69d9-xxz9x\" (UID: \"1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxz9x" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.663243 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4a5aec17-3235-4678-afa1-08da4b223f45-profile-collector-cert\") pod \"catalog-operator-68c6474976-xjxbf\" (UID: \"4a5aec17-3235-4678-afa1-08da4b223f45\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xjxbf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.663270 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr29b\" (UniqueName: \"kubernetes.io/projected/ec9073bd-31ec-4b35-93a9-08d26b62c60d-kube-api-access-dr29b\") pod \"openshift-config-operator-7777fb866f-b45sk\" (UID: \"ec9073bd-31ec-4b35-93a9-08d26b62c60d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b45sk" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.663295 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j95cb\" (UniqueName: \"kubernetes.io/projected/819051f4-236d-42d3-b3cf-c82103136dce-kube-api-access-j95cb\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.663315 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f903f69c-2db9-478a-9141-22f6aeb27ce3-etcd-client\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") 
" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.663331 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/819051f4-236d-42d3-b3cf-c82103136dce-registry-tls\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.663357 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-oauth-serving-cert\") pod \"console-f9d7485db-nswtf\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.664489 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/819051f4-236d-42d3-b3cf-c82103136dce-ca-trust-extracted\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.670238 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f903f69c-2db9-478a-9141-22f6aeb27ce3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: E1129 06:36:27.670586 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-29 06:36:28.170566421 +0000 UTC m=+139.214948702 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.672158 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-service-ca\") pod \"console-f9d7485db-nswtf\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.672667 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-console-config\") pod \"console-f9d7485db-nswtf\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.672748 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f903f69c-2db9-478a-9141-22f6aeb27ce3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.673175 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f903f69c-2db9-478a-9141-22f6aeb27ce3-audit-policies\") pod \"apiserver-7bbb656c7d-ts6sm\" 
(UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.674268 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-trusted-ca-bundle\") pod \"console-f9d7485db-nswtf\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.676768 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/711e27d0-dd37-4f6f-adae-5c04bb856f47-console-oauth-config\") pod \"console-f9d7485db-nswtf\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.676843 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f903f69c-2db9-478a-9141-22f6aeb27ce3-audit-dir\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.677755 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-oauth-serving-cert\") pod \"console-f9d7485db-nswtf\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.678692 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/819051f4-236d-42d3-b3cf-c82103136dce-registry-certificates\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: 
\"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.679160 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/819051f4-236d-42d3-b3cf-c82103136dce-trusted-ca\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.679617 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0-trusted-ca\") pod \"ingress-operator-5b745b69d9-xxz9x\" (UID: \"1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxz9x" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.679633 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-flwtp"] Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.681485 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/819051f4-236d-42d3-b3cf-c82103136dce-installation-pull-secrets\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.684416 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ec9073bd-31ec-4b35-93a9-08d26b62c60d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-b45sk\" (UID: \"ec9073bd-31ec-4b35-93a9-08d26b62c60d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b45sk" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 
06:36:27.685013 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/711e27d0-dd37-4f6f-adae-5c04bb856f47-console-serving-cert\") pod \"console-f9d7485db-nswtf\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.694288 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f903f69c-2db9-478a-9141-22f6aeb27ce3-encryption-config\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.694610 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec9073bd-31ec-4b35-93a9-08d26b62c60d-serving-cert\") pod \"openshift-config-operator-7777fb866f-b45sk\" (UID: \"ec9073bd-31ec-4b35-93a9-08d26b62c60d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b45sk" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.710166 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xxz9x\" (UID: \"1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxz9x" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.710166 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/819051f4-236d-42d3-b3cf-c82103136dce-registry-tls\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc 
kubenswrapper[4947]: I1129 06:36:27.713209 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0-metrics-tls\") pod \"ingress-operator-5b745b69d9-xxz9x\" (UID: \"1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxz9x" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.718822 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f903f69c-2db9-478a-9141-22f6aeb27ce3-serving-cert\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.711459 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f903f69c-2db9-478a-9141-22f6aeb27ce3-etcd-client\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.733513 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/59495e16-0371-48e0-b517-0adb7ac8eb4f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mzd69\" (UID: \"59495e16-0371-48e0-b517-0adb7ac8eb4f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzd69" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.743347 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqwlb\" (UniqueName: \"kubernetes.io/projected/59495e16-0371-48e0-b517-0adb7ac8eb4f-kube-api-access-pqwlb\") pod \"package-server-manager-789f6589d5-mzd69\" (UID: \"59495e16-0371-48e0-b517-0adb7ac8eb4f\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzd69" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.745398 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx5ng\" (UniqueName: \"kubernetes.io/projected/f903f69c-2db9-478a-9141-22f6aeb27ce3-kube-api-access-cx5ng\") pod \"apiserver-7bbb656c7d-ts6sm\" (UID: \"f903f69c-2db9-478a-9141-22f6aeb27ce3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.746234 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lx74d"] Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.764468 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.764727 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q78tr\" (UniqueName: \"kubernetes.io/projected/4a5aec17-3235-4678-afa1-08da4b223f45-kube-api-access-q78tr\") pod \"catalog-operator-68c6474976-xjxbf\" (UID: \"4a5aec17-3235-4678-afa1-08da4b223f45\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xjxbf" Nov 29 06:36:27 crc kubenswrapper[4947]: E1129 06:36:27.764759 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:28.264732244 +0000 UTC m=+139.309114355 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.764800 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0eed2dde-d158-43e1-8df2-f5a309ef3da3-metrics-tls\") pod \"dns-default-td4m7\" (UID: \"0eed2dde-d158-43e1-8df2-f5a309ef3da3\") " pod="openshift-dns/dns-default-td4m7" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.764841 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9hwv\" (UniqueName: \"kubernetes.io/projected/4cd078d1-1fb6-4997-a55e-f90cfea7bf7a-kube-api-access-n9hwv\") pod \"control-plane-machine-set-operator-78cbb6b69f-66xmw\" (UID: \"4cd078d1-1fb6-4997-a55e-f90cfea7bf7a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-66xmw" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.764869 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r9pv\" (UniqueName: \"kubernetes.io/projected/7fc6440e-f991-4421-b078-9496ffdfb74d-kube-api-access-5r9pv\") pod \"csi-hostpathplugin-p7wlh\" (UID: \"7fc6440e-f991-4421-b078-9496ffdfb74d\") " pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.764910 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f161f69-0220-4b3f-9f46-76277cd105f9-secret-volume\") pod \"collect-profiles-29406630-wnfkh\" (UID: 
\"3f161f69-0220-4b3f-9f46-76277cd105f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.764930 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l2xv\" (UniqueName: \"kubernetes.io/projected/3f161f69-0220-4b3f-9f46-76277cd105f9-kube-api-access-8l2xv\") pod \"collect-profiles-29406630-wnfkh\" (UID: \"3f161f69-0220-4b3f-9f46-76277cd105f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.764945 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f05a65e3-0462-4718-a98a-864597cfb0e7-certs\") pod \"machine-config-server-l7pgj\" (UID: \"f05a65e3-0462-4718-a98a-864597cfb0e7\") " pod="openshift-machine-config-operator/machine-config-server-l7pgj" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.764961 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26b3766f-e08f-47e8-803e-d138e6e7620f-cert\") pod \"ingress-canary-4cgzv\" (UID: \"26b3766f-e08f-47e8-803e-d138e6e7620f\") " pod="openshift-ingress-canary/ingress-canary-4cgzv" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.764999 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.765031 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/338a8f3b-127a-44a4-af55-0d7f034bbf17-config\") pod \"service-ca-operator-777779d784-7jmxk\" (UID: \"338a8f3b-127a-44a4-af55-0d7f034bbf17\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jmxk" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.765070 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4a5aec17-3235-4678-afa1-08da4b223f45-profile-collector-cert\") pod \"catalog-operator-68c6474976-xjxbf\" (UID: \"4a5aec17-3235-4678-afa1-08da4b223f45\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xjxbf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.765105 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxr7h\" (UniqueName: \"kubernetes.io/projected/f05a65e3-0462-4718-a98a-864597cfb0e7-kube-api-access-cxr7h\") pod \"machine-config-server-l7pgj\" (UID: \"f05a65e3-0462-4718-a98a-864597cfb0e7\") " pod="openshift-machine-config-operator/machine-config-server-l7pgj" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.765127 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfljr\" (UniqueName: \"kubernetes.io/projected/0eed2dde-d158-43e1-8df2-f5a309ef3da3-kube-api-access-wfljr\") pod \"dns-default-td4m7\" (UID: \"0eed2dde-d158-43e1-8df2-f5a309ef3da3\") " pod="openshift-dns/dns-default-td4m7" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.765143 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f05a65e3-0462-4718-a98a-864597cfb0e7-node-bootstrap-token\") pod \"machine-config-server-l7pgj\" (UID: \"f05a65e3-0462-4718-a98a-864597cfb0e7\") " pod="openshift-machine-config-operator/machine-config-server-l7pgj" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.765156 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7fc6440e-f991-4421-b078-9496ffdfb74d-csi-data-dir\") pod \"csi-hostpathplugin-p7wlh\" (UID: \"7fc6440e-f991-4421-b078-9496ffdfb74d\") " pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.765185 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7fc6440e-f991-4421-b078-9496ffdfb74d-socket-dir\") pod \"csi-hostpathplugin-p7wlh\" (UID: \"7fc6440e-f991-4421-b078-9496ffdfb74d\") " pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.765203 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eed2dde-d158-43e1-8df2-f5a309ef3da3-config-volume\") pod \"dns-default-td4m7\" (UID: \"0eed2dde-d158-43e1-8df2-f5a309ef3da3\") " pod="openshift-dns/dns-default-td4m7" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.765286 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9jcs\" (UniqueName: \"kubernetes.io/projected/338a8f3b-127a-44a4-af55-0d7f034bbf17-kube-api-access-k9jcs\") pod \"service-ca-operator-777779d784-7jmxk\" (UID: \"338a8f3b-127a-44a4-af55-0d7f034bbf17\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jmxk" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.765316 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4a5aec17-3235-4678-afa1-08da4b223f45-srv-cert\") pod \"catalog-operator-68c6474976-xjxbf\" (UID: \"4a5aec17-3235-4678-afa1-08da4b223f45\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xjxbf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.765331 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7fc6440e-f991-4421-b078-9496ffdfb74d-registration-dir\") pod \"csi-hostpathplugin-p7wlh\" (UID: \"7fc6440e-f991-4421-b078-9496ffdfb74d\") " pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.765355 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/338a8f3b-127a-44a4-af55-0d7f034bbf17-serving-cert\") pod \"service-ca-operator-777779d784-7jmxk\" (UID: \"338a8f3b-127a-44a4-af55-0d7f034bbf17\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jmxk" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.765372 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9qc6\" (UniqueName: \"kubernetes.io/projected/26b3766f-e08f-47e8-803e-d138e6e7620f-kube-api-access-n9qc6\") pod \"ingress-canary-4cgzv\" (UID: \"26b3766f-e08f-47e8-803e-d138e6e7620f\") " pod="openshift-ingress-canary/ingress-canary-4cgzv" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.765400 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/675de8ae-169a-4737-a290-54cdb32d8cb0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6mhws\" (UID: \"675de8ae-169a-4737-a290-54cdb32d8cb0\") " pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.765425 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7fc6440e-f991-4421-b078-9496ffdfb74d-mountpoint-dir\") pod \"csi-hostpathplugin-p7wlh\" (UID: \"7fc6440e-f991-4421-b078-9496ffdfb74d\") " pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" Nov 29 06:36:27 crc 
kubenswrapper[4947]: I1129 06:36:27.765450 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cd078d1-1fb6-4997-a55e-f90cfea7bf7a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-66xmw\" (UID: \"4cd078d1-1fb6-4997-a55e-f90cfea7bf7a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-66xmw" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.765484 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/675de8ae-169a-4737-a290-54cdb32d8cb0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6mhws\" (UID: \"675de8ae-169a-4737-a290-54cdb32d8cb0\") " pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.765505 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f161f69-0220-4b3f-9f46-76277cd105f9-config-volume\") pod \"collect-profiles-29406630-wnfkh\" (UID: \"3f161f69-0220-4b3f-9f46-76277cd105f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.765539 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmdhm\" (UniqueName: \"kubernetes.io/projected/675de8ae-169a-4737-a290-54cdb32d8cb0-kube-api-access-wmdhm\") pod \"marketplace-operator-79b997595-6mhws\" (UID: \"675de8ae-169a-4737-a290-54cdb32d8cb0\") " pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.765559 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/7fc6440e-f991-4421-b078-9496ffdfb74d-plugins-dir\") pod \"csi-hostpathplugin-p7wlh\" (UID: \"7fc6440e-f991-4421-b078-9496ffdfb74d\") " pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.766029 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7fc6440e-f991-4421-b078-9496ffdfb74d-plugins-dir\") pod \"csi-hostpathplugin-p7wlh\" (UID: \"7fc6440e-f991-4421-b078-9496ffdfb74d\") " pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.766354 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7fc6440e-f991-4421-b078-9496ffdfb74d-socket-dir\") pod \"csi-hostpathplugin-p7wlh\" (UID: \"7fc6440e-f991-4421-b078-9496ffdfb74d\") " pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.766820 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eed2dde-d158-43e1-8df2-f5a309ef3da3-config-volume\") pod \"dns-default-td4m7\" (UID: \"0eed2dde-d158-43e1-8df2-f5a309ef3da3\") " pod="openshift-dns/dns-default-td4m7" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.766879 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7fc6440e-f991-4421-b078-9496ffdfb74d-mountpoint-dir\") pod \"csi-hostpathplugin-p7wlh\" (UID: \"7fc6440e-f991-4421-b078-9496ffdfb74d\") " pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.768118 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f161f69-0220-4b3f-9f46-76277cd105f9-config-volume\") pod \"collect-profiles-29406630-wnfkh\" 
(UID: \"3f161f69-0220-4b3f-9f46-76277cd105f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.769264 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7fc6440e-f991-4421-b078-9496ffdfb74d-registration-dir\") pod \"csi-hostpathplugin-p7wlh\" (UID: \"7fc6440e-f991-4421-b078-9496ffdfb74d\") " pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.774931 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/675de8ae-169a-4737-a290-54cdb32d8cb0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6mhws\" (UID: \"675de8ae-169a-4737-a290-54cdb32d8cb0\") " pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" Nov 29 06:36:27 crc kubenswrapper[4947]: E1129 06:36:27.780189 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:28.280168583 +0000 UTC m=+139.324550744 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.780488 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7fc6440e-f991-4421-b078-9496ffdfb74d-csi-data-dir\") pod \"csi-hostpathplugin-p7wlh\" (UID: \"7fc6440e-f991-4421-b078-9496ffdfb74d\") " pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.782973 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f05a65e3-0462-4718-a98a-864597cfb0e7-certs\") pod \"machine-config-server-l7pgj\" (UID: \"f05a65e3-0462-4718-a98a-864597cfb0e7\") " pod="openshift-machine-config-operator/machine-config-server-l7pgj" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.783211 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4a5aec17-3235-4678-afa1-08da4b223f45-srv-cert\") pod \"catalog-operator-68c6474976-xjxbf\" (UID: \"4a5aec17-3235-4678-afa1-08da4b223f45\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xjxbf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.783994 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/338a8f3b-127a-44a4-af55-0d7f034bbf17-config\") pod \"service-ca-operator-777779d784-7jmxk\" (UID: \"338a8f3b-127a-44a4-af55-0d7f034bbf17\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jmxk" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.786838 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cd078d1-1fb6-4997-a55e-f90cfea7bf7a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-66xmw\" (UID: \"4cd078d1-1fb6-4997-a55e-f90cfea7bf7a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-66xmw" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.787780 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f161f69-0220-4b3f-9f46-76277cd105f9-secret-volume\") pod \"collect-profiles-29406630-wnfkh\" (UID: \"3f161f69-0220-4b3f-9f46-76277cd105f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.791551 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26b3766f-e08f-47e8-803e-d138e6e7620f-cert\") pod \"ingress-canary-4cgzv\" (UID: \"26b3766f-e08f-47e8-803e-d138e6e7620f\") " pod="openshift-ingress-canary/ingress-canary-4cgzv" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.797070 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2fqm\" (UniqueName: \"kubernetes.io/projected/711e27d0-dd37-4f6f-adae-5c04bb856f47-kube-api-access-c2fqm\") pod \"console-f9d7485db-nswtf\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.797677 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/675de8ae-169a-4737-a290-54cdb32d8cb0-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-6mhws\" (UID: \"675de8ae-169a-4737-a290-54cdb32d8cb0\") " pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.798445 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-tcchr"] Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.800772 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0eed2dde-d158-43e1-8df2-f5a309ef3da3-metrics-tls\") pod \"dns-default-td4m7\" (UID: \"0eed2dde-d158-43e1-8df2-f5a309ef3da3\") " pod="openshift-dns/dns-default-td4m7" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.807907 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/338a8f3b-127a-44a4-af55-0d7f034bbf17-serving-cert\") pod \"service-ca-operator-777779d784-7jmxk\" (UID: \"338a8f3b-127a-44a4-af55-0d7f034bbf17\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jmxk" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.808338 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4a5aec17-3235-4678-afa1-08da4b223f45-profile-collector-cert\") pod \"catalog-operator-68c6474976-xjxbf\" (UID: \"4a5aec17-3235-4678-afa1-08da4b223f45\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xjxbf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.813347 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f05a65e3-0462-4718-a98a-864597cfb0e7-node-bootstrap-token\") pod \"machine-config-server-l7pgj\" (UID: \"f05a65e3-0462-4718-a98a-864597cfb0e7\") " pod="openshift-machine-config-operator/machine-config-server-l7pgj" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 
06:36:27.815721 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4nrx\" (UniqueName: \"kubernetes.io/projected/1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0-kube-api-access-h4nrx\") pod \"ingress-operator-5b745b69d9-xxz9x\" (UID: \"1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxz9x" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.823470 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-g227d"] Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.827705 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.831156 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/819051f4-236d-42d3-b3cf-c82103136dce-bound-sa-token\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.836634 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr29b\" (UniqueName: \"kubernetes.io/projected/ec9073bd-31ec-4b35-93a9-08d26b62c60d-kube-api-access-dr29b\") pod \"openshift-config-operator-7777fb866f-b45sk\" (UID: \"ec9073bd-31ec-4b35-93a9-08d26b62c60d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b45sk" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.852674 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxz9x" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.857391 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j95cb\" (UniqueName: \"kubernetes.io/projected/819051f4-236d-42d3-b3cf-c82103136dce-kube-api-access-j95cb\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.869108 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:27 crc kubenswrapper[4947]: E1129 06:36:27.872511 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:28.372481541 +0000 UTC m=+139.416863622 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.873342 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.877099 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9rskb" event={"ID":"a689b7f0-2ae8-4200-9e32-0ed56e5791d1","Type":"ContainerStarted","Data":"763ef3d15a5fa62ed1c0df0a24024730c775996bc48333a1d700c44c15f9b1c3"} Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.884459 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sghgg" event={"ID":"e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a","Type":"ContainerStarted","Data":"147b4907f6b4c2cf81d1dd04276ebda11ebff9542d7b537ef1ab24a65291d800"} Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.884516 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sghgg" event={"ID":"e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a","Type":"ContainerStarted","Data":"92e1554e54113ece793b799f0d47256df3c1dc1e54de0ba6c05d06ec60218562"} Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.891037 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hm525" event={"ID":"061238bd-c978-4bf9-9868-5ef174d414f2","Type":"ContainerStarted","Data":"780f45b28b3ac82ea630632b0d67e44ab5660a2dd6a1a849c4a62317f08b9163"} Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.899356 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q78tr\" (UniqueName: \"kubernetes.io/projected/4a5aec17-3235-4678-afa1-08da4b223f45-kube-api-access-q78tr\") pod \"catalog-operator-68c6474976-xjxbf\" (UID: \"4a5aec17-3235-4678-afa1-08da4b223f45\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xjxbf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.909596 4947 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk" event={"ID":"a50236c5-0779-4d32-968b-2d4aee931dd6","Type":"ContainerStarted","Data":"947c6f2d84ed2d9e8ae601ba4b9687702e0d47ef97db1c109e7c4b9e77815c75"} Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.909653 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk" event={"ID":"a50236c5-0779-4d32-968b-2d4aee931dd6","Type":"ContainerStarted","Data":"a78efae40697486cf2df27a16977329694d4b2f2902d061a494f2cbd91609042"} Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.926077 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" event={"ID":"20404d7a-857c-4f60-beef-e6ef9116804d","Type":"ContainerStarted","Data":"2b390e87f6ffcc305614e4daccab46067f900eae4e11463e1241d9aecbfbc308"} Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.927237 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.930489 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-6ndbx" event={"ID":"a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71","Type":"ContainerStarted","Data":"576a9e34239221ee9b11920dc4eec4f393bd7a6a310e3b53eb73382ba203e279"} Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.932000 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-flwtp" event={"ID":"d0bccdee-ee49-4d76-9826-0e8ece077528","Type":"ContainerStarted","Data":"9d74399bebfa96038812876754a91c7f65a843c9dfe9796ff9375067a07a4087"} Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.932354 4947 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-z8t9v 
container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.932410 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" podUID="20404d7a-857c-4f60-beef-e6ef9116804d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.941453 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" event={"ID":"8ff97c32-0757-44e2-8cad-55b8bfadf0a8","Type":"ContainerStarted","Data":"5fc41fdc4f8e60ff9b865b0ad50f446ce7ea47c4e71ad1c71a696910a8cb6862"} Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.941508 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" event={"ID":"8ff97c32-0757-44e2-8cad-55b8bfadf0a8","Type":"ContainerStarted","Data":"a975cb0d2bf72458514f62e38943ed62940c8573c7801a00877100e6de726b42"} Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.944025 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.944997 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r88sq" event={"ID":"13ecfe15-43e4-42ff-817f-fc95fb8f54aa","Type":"ContainerStarted","Data":"5aaa177fd72f4ae4de76bd96c17e280d16f8e944a27a6238829bafd9097a48f6"} Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.945043 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r88sq" event={"ID":"13ecfe15-43e4-42ff-817f-fc95fb8f54aa","Type":"ContainerStarted","Data":"2d7701bdaf972bea06607488a22fa3cd2551009bbafe194bb0702cf2f9438723"} Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.945851 4947 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2vgqt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.945906 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" podUID="8ff97c32-0757-44e2-8cad-55b8bfadf0a8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.947786 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzd69" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.956513 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b6rpb" event={"ID":"2ce0e4b6-b001-41b7-a850-a77a9b7131d9","Type":"ContainerStarted","Data":"34d15e3abe8712e7f0bec00a8b56e12d10961c91dc982b9b3378459989dc3f1e"} Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.960087 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mcs9v" event={"ID":"96b6a564-8c15-4680-a2e2-c8bf9289fdf8","Type":"ContainerStarted","Data":"500aea9f64fe01b0167e92df434f91626b961eb930ff9369f921a55b6a3016c4"} Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.960152 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mcs9v" event={"ID":"96b6a564-8c15-4680-a2e2-c8bf9289fdf8","Type":"ContainerStarted","Data":"0065b4ce0a66bbe900ee78c4aafe64a2bc0e535adb2b4d27a041144f83441ec9"} Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.967688 4947 patch_prober.go:28] interesting pod/console-operator-58897d9998-mcs9v container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.967751 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mcs9v" podUID="96b6a564-8c15-4680-a2e2-c8bf9289fdf8" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.968441 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-console-operator/console-operator-58897d9998-mcs9v" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.973529 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.974342 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9hwv\" (UniqueName: \"kubernetes.io/projected/4cd078d1-1fb6-4997-a55e-f90cfea7bf7a-kube-api-access-n9hwv\") pod \"control-plane-machine-set-operator-78cbb6b69f-66xmw\" (UID: \"4cd078d1-1fb6-4997-a55e-f90cfea7bf7a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-66xmw" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.977461 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r9pv\" (UniqueName: \"kubernetes.io/projected/7fc6440e-f991-4421-b078-9496ffdfb74d-kube-api-access-5r9pv\") pod \"csi-hostpathplugin-p7wlh\" (UID: \"7fc6440e-f991-4421-b078-9496ffdfb74d\") " pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.978320 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9jcs\" (UniqueName: \"kubernetes.io/projected/338a8f3b-127a-44a4-af55-0d7f034bbf17-kube-api-access-k9jcs\") pod \"service-ca-operator-777779d784-7jmxk\" (UID: \"338a8f3b-127a-44a4-af55-0d7f034bbf17\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jmxk" Nov 29 06:36:27 crc kubenswrapper[4947]: E1129 06:36:27.978611 4947 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:28.47859453 +0000 UTC m=+139.522976611 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.988453 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xjxbf" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.989346 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmdhm\" (UniqueName: \"kubernetes.io/projected/675de8ae-169a-4737-a290-54cdb32d8cb0-kube-api-access-wmdhm\") pod \"marketplace-operator-79b997595-6mhws\" (UID: \"675de8ae-169a-4737-a290-54cdb32d8cb0\") " pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.996021 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jmxk" Nov 29 06:36:27 crc kubenswrapper[4947]: I1129 06:36:27.996313 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l2xv\" (UniqueName: \"kubernetes.io/projected/3f161f69-0220-4b3f-9f46-76277cd105f9-kube-api-access-8l2xv\") pod \"collect-profiles-29406630-wnfkh\" (UID: \"3f161f69-0220-4b3f-9f46-76277cd105f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh" Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.004908 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.016106 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9qc6\" (UniqueName: \"kubernetes.io/projected/26b3766f-e08f-47e8-803e-d138e6e7620f-kube-api-access-n9qc6\") pod \"ingress-canary-4cgzv\" (UID: \"26b3766f-e08f-47e8-803e-d138e6e7620f\") " pod="openshift-ingress-canary/ingress-canary-4cgzv" Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.016609 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh" Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.022815 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-66xmw" Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.034719 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxr7h\" (UniqueName: \"kubernetes.io/projected/f05a65e3-0462-4718-a98a-864597cfb0e7-kube-api-access-cxr7h\") pod \"machine-config-server-l7pgj\" (UID: \"f05a65e3-0462-4718-a98a-864597cfb0e7\") " pod="openshift-machine-config-operator/machine-config-server-l7pgj" Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.037504 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zcdgs"] Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.052512 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.062442 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-l7pgj" Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.070769 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4cgzv" Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.070804 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfljr\" (UniqueName: \"kubernetes.io/projected/0eed2dde-d158-43e1-8df2-f5a309ef3da3-kube-api-access-wfljr\") pod \"dns-default-td4m7\" (UID: \"0eed2dde-d158-43e1-8df2-f5a309ef3da3\") " pod="openshift-dns/dns-default-td4m7" Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.074957 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:28 crc kubenswrapper[4947]: E1129 06:36:28.078736 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:28.57871683 +0000 UTC m=+139.623098911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.137069 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b45sk" Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.186064 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:28 crc kubenswrapper[4947]: E1129 06:36:28.186359 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:28.686347664 +0000 UTC m=+139.730729745 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.206866 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dzzqk"] Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.289351 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:28 crc 
kubenswrapper[4947]: E1129 06:36:28.289930 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:28.789907136 +0000 UTC m=+139.834289207 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.329755 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-td4m7" Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.394776 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:28 crc kubenswrapper[4947]: E1129 06:36:28.395450 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:28.895429447 +0000 UTC m=+139.939811538 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.496665 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:28 crc kubenswrapper[4947]: E1129 06:36:28.496786 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:28.996760623 +0000 UTC m=+140.041142704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.496936 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:28 crc kubenswrapper[4947]: E1129 06:36:28.497247 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:28.997209597 +0000 UTC m=+140.041591678 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.512643 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" podStartSLOduration=118.512623505 podStartE2EDuration="1m58.512623505s" podCreationTimestamp="2025-11-29 06:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:28.510201173 +0000 UTC m=+139.554583254" watchObservedRunningTime="2025-11-29 06:36:28.512623505 +0000 UTC m=+139.557005586" Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.600976 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:28 crc kubenswrapper[4947]: E1129 06:36:28.607111 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:29.106988064 +0000 UTC m=+140.151370145 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.708934 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:28 crc kubenswrapper[4947]: E1129 06:36:28.709732 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:29.209718302 +0000 UTC m=+140.254100393 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.796189 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-gkrmk" podStartSLOduration=119.796167385 podStartE2EDuration="1m59.796167385s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:28.794037032 +0000 UTC m=+139.838419113" watchObservedRunningTime="2025-11-29 06:36:28.796167385 +0000 UTC m=+139.840549486" Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.812639 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:28 crc kubenswrapper[4947]: E1129 06:36:28.812987 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:29.312972275 +0000 UTC m=+140.357354356 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.879171 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" podStartSLOduration=119.879154865 podStartE2EDuration="1m59.879154865s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:28.878842826 +0000 UTC m=+139.923224907" watchObservedRunningTime="2025-11-29 06:36:28.879154865 +0000 UTC m=+139.923536946" Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.915002 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:28 crc kubenswrapper[4947]: E1129 06:36:28.916025 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:29.416008632 +0000 UTC m=+140.460390713 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.987270 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" event={"ID":"20404d7a-857c-4f60-beef-e6ef9116804d","Type":"ContainerStarted","Data":"ad5f00e402dafba5eab9ea6138a38baba64b289740a22cb0c03187ec38a1a389"} Nov 29 06:36:28 crc kubenswrapper[4947]: I1129 06:36:28.995911 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dzzqk" event={"ID":"e8a3f832-b756-47ec-9729-7beacc669293","Type":"ContainerStarted","Data":"8fd811513cff7c75cdd1ffdad580eca09f1942556130536ca111e79a6af910d8"} Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.003324 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-6ndbx" event={"ID":"a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71","Type":"ContainerStarted","Data":"e618ff38681e4802d4765e6e24dd3f43ee842d47c99399f7297bc0aa663e1d29"} Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.005701 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hm525" event={"ID":"061238bd-c978-4bf9-9868-5ef174d414f2","Type":"ContainerStarted","Data":"be1a53a132170015cf99b7d1fce069bc9f142362cfd899cd37e004b5a525b04e"} Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.008706 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-lx74d" event={"ID":"b85a2376-eba6-4a1e-b6eb-870ffc696f31","Type":"ContainerStarted","Data":"e577cbcd1b10e9583afc62e346d0b99bdcbeb09a76d051db19aca9f90074d0f1"} Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.010996 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r88sq" event={"ID":"13ecfe15-43e4-42ff-817f-fc95fb8f54aa","Type":"ContainerStarted","Data":"6d73f659a172d473c6cd3edf9b08bed20e5c53ce1a624e0dc5768ced2ff592e8"} Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.014619 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b6rpb" event={"ID":"2ce0e4b6-b001-41b7-a850-a77a9b7131d9","Type":"ContainerStarted","Data":"11b2d453d75685dd6e014a6b92e67d63139666b19085aca8f74b5557c18626c5"} Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.016142 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:29 crc kubenswrapper[4947]: E1129 06:36:29.016698 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:29.516680339 +0000 UTC m=+140.561062420 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.018831 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sghgg" event={"ID":"e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a","Type":"ContainerStarted","Data":"86b2c705ea366b42df7dd6aca63a17370e3fb121328a6e8f4781a7f57bbf7ed5"} Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.021323 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zcdgs" event={"ID":"e0e2a5e3-321b-4774-bf83-dd727fc954d2","Type":"ContainerStarted","Data":"acf2a28d598bbb442717b4735affb75ffbb4def2c49eb5df7dbb1e2acbb12c00"} Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.025860 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-g227d" event={"ID":"4cfb4573-1a3c-43b5-aa58-83774b1b9212","Type":"ContainerStarted","Data":"971e6f5eae027707374d3b0849da027772dcec739f7d0ea071c1ff430af3e741"} Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.029869 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9rskb" event={"ID":"a689b7f0-2ae8-4200-9e32-0ed56e5791d1","Type":"ContainerStarted","Data":"d00ed5359d1df3ad6c55d2c282803d9abf4bd37caa23bb60fbfcf92d2718982c"} Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.032915 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-tcchr" event={"ID":"f060d79a-f223-455c-b203-0bd9e430a896","Type":"ContainerStarted","Data":"29614be7b7af5c7a8fc3eb6aacaa1da34894b1f9c4989c3e9eb5a41987363cbe"} Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.034328 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-flwtp" event={"ID":"d0bccdee-ee49-4d76-9826-0e8ece077528","Type":"ContainerStarted","Data":"2a4043c7ebcee6afaaa69b736c24acc0fd904331791c92a12ab7c7c4ae54ec9f"} Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.038527 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-l7pgj" event={"ID":"f05a65e3-0462-4718-a98a-864597cfb0e7","Type":"ContainerStarted","Data":"941a712413a0b86d86693e1b804644800b2c7278b5264b84143aeb88834f33bd"} Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.042680 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.117981 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:29 crc kubenswrapper[4947]: E1129 06:36:29.118443 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:29.618424197 +0000 UTC m=+140.662806358 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.223105 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:29 crc kubenswrapper[4947]: E1129 06:36:29.223431 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:29.723408962 +0000 UTC m=+140.767791033 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.223697 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:29 crc kubenswrapper[4947]: E1129 06:36:29.224007 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:29.72399154 +0000 UTC m=+140.768373621 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.242271 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bkmbq"] Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.242324 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv66j"] Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.242338 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2drts"] Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.242349 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2nj82"] Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.257538 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt"] Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.266095 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mfvb"] Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.267494 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-72fb2"] Nov 29 06:36:29 crc kubenswrapper[4947]: W1129 06:36:29.291742 4947 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1859af1a_8cea_4330_b44f_69c94692bfde.slice/crio-1491cb839c574cd7415b2899b517fce4b2af5f61418c94163557c61fc6e9d6fb WatchSource:0}: Error finding container 1491cb839c574cd7415b2899b517fce4b2af5f61418c94163557c61fc6e9d6fb: Status 404 returned error can't find the container with id 1491cb839c574cd7415b2899b517fce4b2af5f61418c94163557c61fc6e9d6fb Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.312318 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-nswtf"] Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.324739 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:29 crc kubenswrapper[4947]: E1129 06:36:29.325055 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:29.825028197 +0000 UTC m=+140.869410278 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.408325 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dh8pj"] Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.426898 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:29 crc kubenswrapper[4947]: E1129 06:36:29.427191 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:29.927179018 +0000 UTC m=+140.971561089 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.528020 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:29 crc kubenswrapper[4947]: E1129 06:36:29.528239 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:30.028196835 +0000 UTC m=+141.072578926 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.528473 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:29 crc kubenswrapper[4947]: E1129 06:36:29.528865 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:30.028855024 +0000 UTC m=+141.073237105 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.572973 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-6ndbx" Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.576548 4947 patch_prober.go:28] interesting pod/router-default-5444994796-6ndbx container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.576607 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6ndbx" podUID="a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.587282 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-7jmxk"] Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.602650 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzd69"] Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.604448 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lkf5s"] Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 
06:36:29.629847 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:29 crc kubenswrapper[4947]: E1129 06:36:29.630186 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:30.13017263 +0000 UTC m=+141.174554711 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.659530 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cwrn9"] Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.660671 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-mcs9v" podStartSLOduration=120.660642477 podStartE2EDuration="2m0.660642477s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:29.655341959 +0000 UTC m=+140.699724040" watchObservedRunningTime="2025-11-29 06:36:29.660642477 +0000 UTC m=+140.705024558" Nov 29 
06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.710863 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.710936 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5b7f7"] Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.710981 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-mcs9v" Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.783078 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:29 crc kubenswrapper[4947]: E1129 06:36:29.784207 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:30.284151203 +0000 UTC m=+141.328533284 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.808379 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-b45sk"] Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.811564 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hm525" podStartSLOduration=120.811542778 podStartE2EDuration="2m0.811542778s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:29.758313084 +0000 UTC m=+140.802695165" watchObservedRunningTime="2025-11-29 06:36:29.811542778 +0000 UTC m=+140.855924859" Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.830033 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sghgg" podStartSLOduration=121.830011448 podStartE2EDuration="2m1.830011448s" podCreationTimestamp="2025-11-29 06:34:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:29.804175519 +0000 UTC m=+140.848557600" watchObservedRunningTime="2025-11-29 06:36:29.830011448 +0000 UTC m=+140.874393529" Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.864543 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xxz9x"] Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.908267 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:29 crc kubenswrapper[4947]: E1129 06:36:29.908575 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:30.408551306 +0000 UTC m=+141.452933387 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.908632 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh"] Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.910598 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-p7wlh"] Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.935813 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6mhws"] Nov 29 06:36:29 crc kubenswrapper[4947]: I1129 06:36:29.938247 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-66xmw"] Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.008992 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:30 crc kubenswrapper[4947]: E1129 06:36:30.009314 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:30.509301405 +0000 UTC m=+141.553683476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.038467 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9rskb" podStartSLOduration=121.038443352 podStartE2EDuration="2m1.038443352s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:30.034028701 +0000 UTC m=+141.078410782" watchObservedRunningTime="2025-11-29 06:36:30.038443352 +0000 UTC m=+141.082825433" Nov 29 06:36:30 crc 
kubenswrapper[4947]: I1129 06:36:30.064904 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" event={"ID":"51755494-2de8-480e-a1e5-fc10c9af3d06","Type":"ContainerStarted","Data":"cf0ed6b71d6619d73770585a5962ccd2c8cbe1e644937cf9c5ac1b79bce0896b"} Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.068872 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2nj82" event={"ID":"967c6d72-d998-4b42-8de2-b9fd1712fc12","Type":"ContainerStarted","Data":"b234df5aaf9b4da5e9c2d6256dd69afdab7cb48b8cc523b8bad283d688ad5b0a"} Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.069942 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nswtf" event={"ID":"711e27d0-dd37-4f6f-adae-5c04bb856f47","Type":"ContainerStarted","Data":"1d39fe89576b447180b030d4a7d83e6ad08d56eab53acfadebcc72f307fc56d1"} Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.071486 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bkmbq" event={"ID":"e6f31928-dd2b-41c9-8103-3652eb01b1ad","Type":"ContainerStarted","Data":"fc02d97f7f6428b1f60d633b88504652c490ff2ecec4c86c749748b5bf2de179"} Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.072665 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzd69" event={"ID":"59495e16-0371-48e0-b517-0adb7ac8eb4f","Type":"ContainerStarted","Data":"7c1a067170cec18cc51fa24b8cc4d10b77373795afa3bf7623a6b491b117dffa"} Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.074320 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mfvb" 
event={"ID":"019be0a2-be0d-43c7-a91d-280a3508c623","Type":"ContainerStarted","Data":"83b5efc4f003b2f4d144cc3fa2b2bc511de32b48a96587bfba28a5b6d643b892"} Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.076747 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jmxk" event={"ID":"338a8f3b-127a-44a4-af55-0d7f034bbf17","Type":"ContainerStarted","Data":"3acd3b7f3aa6e058051780d5b3beec5c2f4b2039586891c9927a59ef9e37697b"} Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.077586 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2drts" event={"ID":"d950d1cc-546a-4650-ab9c-e58388bda769","Type":"ContainerStarted","Data":"8714a973f6c609a2728d02b5a5ff750eef2e59418cba6310a716ffaf29e9c3a3"} Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.078639 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cwrn9" event={"ID":"07b30d68-832e-44c3-aa22-18c8f1cbb6e6","Type":"ContainerStarted","Data":"5dfa21256e6aca6c7dcad4d30bfe26f204aaf707eaae6ba941a12a70ee0eac4a"} Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.079578 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dh8pj" event={"ID":"23977121-e0f0-4055-a727-c4050a20f2a6","Type":"ContainerStarted","Data":"1e4ff41581097ec9493b5b2c09162209d0e79f59eedc85e3f8e43596e15c69d1"} Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.320161 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lkf5s" event={"ID":"79190f34-e70f-4fa8-b8da-7db3b29678a0","Type":"ContainerStarted","Data":"c425b24e1512972e215428d7bd5c05ee08c7d3448d1d9db5aac622b35b4b0f48"} Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.329036 4947 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:30 crc kubenswrapper[4947]: E1129 06:36:30.329156 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:30.829139205 +0000 UTC m=+141.873521286 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.329555 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:30 crc kubenswrapper[4947]: E1129 06:36:30.329930 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:30.829915808 +0000 UTC m=+141.874297889 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:30 crc kubenswrapper[4947]: W1129 06:36:30.346263 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec9073bd_31ec_4b35_93a9_08d26b62c60d.slice/crio-9686e6cb5ecbd6a599a7ba71d88930004f30021b931a7872b361991f5ba3257b WatchSource:0}: Error finding container 9686e6cb5ecbd6a599a7ba71d88930004f30021b931a7872b361991f5ba3257b: Status 404 returned error can't find the container with id 9686e6cb5ecbd6a599a7ba71d88930004f30021b931a7872b361991f5ba3257b Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.347141 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv66j" event={"ID":"5da50f13-a25d-403c-8fda-39f93a5cf4fd","Type":"ContainerStarted","Data":"367d2e015e69330c1a9561e008800a5a3b00f8ad385cfc193c24438e4e513bff"} Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.391487 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5b7f7" event={"ID":"1f2bc292-c394-4dd0-9ce5-51d960430aa4","Type":"ContainerStarted","Data":"47d3e39de2513e468276c9bd3345e131b41d959e47be0acf43ce0762fdb9f3a7"} Nov 29 06:36:30 crc kubenswrapper[4947]: W1129 06:36:30.406305 4947 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fc6440e_f991_4421_b078_9496ffdfb74d.slice/crio-55a09d340e95bb18c40759249e4fe86fe7cf893c23fb819cf3e14397ca6966dd WatchSource:0}: Error finding container 55a09d340e95bb18c40759249e4fe86fe7cf893c23fb819cf3e14397ca6966dd: Status 404 returned error can't find the container with id 55a09d340e95bb18c40759249e4fe86fe7cf893c23fb819cf3e14397ca6966dd Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.407926 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt" event={"ID":"1859af1a-8cea-4330-b44f-69c94692bfde","Type":"ContainerStarted","Data":"1491cb839c574cd7415b2899b517fce4b2af5f61418c94163557c61fc6e9d6fb"} Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.450477 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:30 crc kubenswrapper[4947]: E1129 06:36:30.451816 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:30.951803036 +0000 UTC m=+141.996185117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.461141 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm"] Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.480850 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4cgzv"] Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.482828 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r88sq" podStartSLOduration=121.482812449 podStartE2EDuration="2m1.482812449s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:30.4250613 +0000 UTC m=+141.469443381" watchObservedRunningTime="2025-11-29 06:36:30.482812449 +0000 UTC m=+141.527194530" Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.534288 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-6ndbx" podStartSLOduration=121.534273931 podStartE2EDuration="2m1.534273931s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:30.512814062 +0000 UTC m=+141.557196143" watchObservedRunningTime="2025-11-29 06:36:30.534273931 +0000 UTC m=+141.578656012" Nov 29 06:36:30 crc 
kubenswrapper[4947]: I1129 06:36:30.534624 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xjxbf"] Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.553298 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:30 crc kubenswrapper[4947]: E1129 06:36:30.553851 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:31.053838223 +0000 UTC m=+142.098220304 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.569650 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-td4m7"] Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.585459 4947 patch_prober.go:28] interesting pod/router-default-5444994796-6ndbx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 06:36:30 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Nov 
29 06:36:30 crc kubenswrapper[4947]: [+]process-running ok Nov 29 06:36:30 crc kubenswrapper[4947]: healthz check failed Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.585525 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6ndbx" podUID="a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.659317 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:30 crc kubenswrapper[4947]: E1129 06:36:30.659845 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:31.159823968 +0000 UTC m=+142.204206049 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.761069 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:30 crc kubenswrapper[4947]: E1129 06:36:30.762318 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:31.262301829 +0000 UTC m=+142.306683910 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.863549 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:30 crc kubenswrapper[4947]: E1129 06:36:30.863934 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:31.363919193 +0000 UTC m=+142.408301274 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:30 crc kubenswrapper[4947]: I1129 06:36:30.965976 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:30 crc kubenswrapper[4947]: E1129 06:36:30.967988 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:31.46797327 +0000 UTC m=+142.512355351 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.066686 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:31 crc kubenswrapper[4947]: E1129 06:36:31.066956 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:31.566937716 +0000 UTC m=+142.611319807 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.067284 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:31 crc kubenswrapper[4947]: E1129 06:36:31.067627 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:31.567618247 +0000 UTC m=+142.612000328 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.168260 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:31 crc kubenswrapper[4947]: E1129 06:36:31.168804 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:31.668785867 +0000 UTC m=+142.713167948 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.269372 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:31 crc kubenswrapper[4947]: E1129 06:36:31.270030 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:31.77001404 +0000 UTC m=+142.814396121 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.370326 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:31 crc kubenswrapper[4947]: E1129 06:36:31.370763 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:31.870734408 +0000 UTC m=+142.915116489 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.449970 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2drts" event={"ID":"d950d1cc-546a-4650-ab9c-e58388bda769","Type":"ContainerStarted","Data":"6ec56ed9353c910d43ef834a670ef2f16c55dbc08b27d37849fa9ad4a54c29dc"} Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.471462 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:31 crc kubenswrapper[4947]: E1129 06:36:31.471807 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:31.971795506 +0000 UTC m=+143.016177587 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.501665 4947 generic.go:334] "Generic (PLEG): container finished" podID="b85a2376-eba6-4a1e-b6eb-870ffc696f31" containerID="8c8a8323f881e566ad469a8b7c03fcefe9365e28a194696f8d50aa9945fa18dc" exitCode=0 Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.505684 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lx74d" event={"ID":"b85a2376-eba6-4a1e-b6eb-870ffc696f31","Type":"ContainerDied","Data":"8c8a8323f881e566ad469a8b7c03fcefe9365e28a194696f8d50aa9945fa18dc"} Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.509506 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-66xmw" event={"ID":"4cd078d1-1fb6-4997-a55e-f90cfea7bf7a","Type":"ContainerStarted","Data":"9de25a79e5e6c7583230c726a55c9adb90ce814d2f6e0be14ee2852842990429"} Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.525306 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-g227d" event={"ID":"4cfb4573-1a3c-43b5-aa58-83774b1b9212","Type":"ContainerStarted","Data":"fc5455d56f1dd1fb4acd7102253ae35f30eaf4dfbd617a467069ac0493b895aa"} Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.550192 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mfvb" 
event={"ID":"019be0a2-be0d-43c7-a91d-280a3508c623","Type":"ContainerStarted","Data":"9bbae844ab0c4ed6d59f2123b5c22d6327aa35a8cd132c08cbfc21e13e652eae"} Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.572367 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:31 crc kubenswrapper[4947]: E1129 06:36:31.573452 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:32.073435222 +0000 UTC m=+143.117817303 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.588622 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mfvb" podStartSLOduration=122.588605303 podStartE2EDuration="2m2.588605303s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:31.588036776 +0000 UTC m=+142.632418857" watchObservedRunningTime="2025-11-29 06:36:31.588605303 +0000 UTC 
m=+142.632987384" Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.597674 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" event={"ID":"51755494-2de8-480e-a1e5-fc10c9af3d06","Type":"ContainerStarted","Data":"8a1a85f4520f8e1fdf1498e0da61293652c59ed96f6786bee8819969576885dc"} Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.598183 4947 patch_prober.go:28] interesting pod/router-default-5444994796-6ndbx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 06:36:31 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Nov 29 06:36:31 crc kubenswrapper[4947]: [+]process-running ok Nov 29 06:36:31 crc kubenswrapper[4947]: healthz check failed Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.598266 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6ndbx" podUID="a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.598477 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.626318 4947 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-72fb2 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.36:6443/healthz\": dial tcp 10.217.0.36:6443: connect: connection refused" start-of-body= Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.626381 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" podUID="51755494-2de8-480e-a1e5-fc10c9af3d06" containerName="oauth-openshift" 
probeResult="failure" output="Get \"https://10.217.0.36:6443/healthz\": dial tcp 10.217.0.36:6443: connect: connection refused" Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.627386 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-g227d" podStartSLOduration=122.627370257 podStartE2EDuration="2m2.627370257s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:31.626990286 +0000 UTC m=+142.671372367" watchObservedRunningTime="2025-11-29 06:36:31.627370257 +0000 UTC m=+142.671752338" Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.678409 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:31 crc kubenswrapper[4947]: E1129 06:36:31.686561 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:32.186543928 +0000 UTC m=+143.230926009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.770053 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" event={"ID":"675de8ae-169a-4737-a290-54cdb32d8cb0","Type":"ContainerStarted","Data":"bbd1e7e850d1b521e66d40250ebb40ea0fa999a2aa6c6ae2912fcb9ab3031f7f"} Nov 29 06:36:31 crc kubenswrapper[4947]: E1129 06:36:31.782434 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:32.282398912 +0000 UTC m=+143.326781003 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.782474 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.782976 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:31 crc kubenswrapper[4947]: E1129 06:36:31.783454 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:32.283434702 +0000 UTC m=+143.327816783 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.863469 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b6rpb" event={"ID":"2ce0e4b6-b001-41b7-a850-a77a9b7131d9","Type":"ContainerStarted","Data":"265539f3ac153eb026308317a5465e32fc4987f9aabf956aa4b16380d3099535"} Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.886520 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:31 crc kubenswrapper[4947]: E1129 06:36:31.886976 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:32.386955914 +0000 UTC m=+143.431337995 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.921058 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" podStartSLOduration=122.921043988 podStartE2EDuration="2m2.921043988s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:31.770429205 +0000 UTC m=+142.814811286" watchObservedRunningTime="2025-11-29 06:36:31.921043988 +0000 UTC m=+142.965426069" Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.921653 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b6rpb" podStartSLOduration=122.921648236 podStartE2EDuration="2m2.921648236s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:31.912056331 +0000 UTC m=+142.956438412" watchObservedRunningTime="2025-11-29 06:36:31.921648236 +0000 UTC m=+142.966030317" Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.921693 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh" event={"ID":"3f161f69-0220-4b3f-9f46-76277cd105f9","Type":"ContainerStarted","Data":"de2481dba38f7f12860db2cb61b2f4648cfae5de917e0579aecfa5ce4be8dacb"} Nov 
29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.967109 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-l7pgj" event={"ID":"f05a65e3-0462-4718-a98a-864597cfb0e7","Type":"ContainerStarted","Data":"31c2e28939a513f25edc88140d43c0ef67c51d55a7c1bdc80410fe04630a0263"} Nov 29 06:36:31 crc kubenswrapper[4947]: I1129 06:36:31.991810 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:31 crc kubenswrapper[4947]: E1129 06:36:31.993087 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:32.493075633 +0000 UTC m=+143.537457714 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.039478 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5b7f7" event={"ID":"1f2bc292-c394-4dd0-9ce5-51d960430aa4","Type":"ContainerStarted","Data":"ab2a8c4b8dbd12b2a334b8cf2c88cf36001af708192a175f3f5861c17e0731fd"} Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.072528 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b45sk" event={"ID":"ec9073bd-31ec-4b35-93a9-08d26b62c60d","Type":"ContainerStarted","Data":"9686e6cb5ecbd6a599a7ba71d88930004f30021b931a7872b361991f5ba3257b"} Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.091171 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zcdgs" event={"ID":"e0e2a5e3-321b-4774-bf83-dd727fc954d2","Type":"ContainerStarted","Data":"cb839807974f9cef91d290be0c19c082d90e237604c70154cabcd00ab0fd90be"} Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.092284 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-zcdgs" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.092694 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:32 crc kubenswrapper[4947]: E1129 06:36:32.093031 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:32.593013267 +0000 UTC m=+143.637395358 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.109762 4947 patch_prober.go:28] interesting pod/downloads-7954f5f757-zcdgs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.109835 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zcdgs" podUID="e0e2a5e3-321b-4774-bf83-dd727fc954d2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.110554 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-tcchr" event={"ID":"f060d79a-f223-455c-b203-0bd9e430a896","Type":"ContainerStarted","Data":"c915daa0bd58ecdefbcb70025611582792c3cb1bdbfd1cbf84068fb9dce65fee"} Nov 29 06:36:32 crc 
kubenswrapper[4947]: I1129 06:36:32.112098 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lkf5s" event={"ID":"79190f34-e70f-4fa8-b8da-7db3b29678a0","Type":"ContainerStarted","Data":"faabb8378ff887ce0196c5f55244021e29a04815bcf0a422ffba6f348d659a57"} Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.133685 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-td4m7" event={"ID":"0eed2dde-d158-43e1-8df2-f5a309ef3da3","Type":"ContainerStarted","Data":"4921d5e8cb32f40c5fcb1d9a704a2226a490a9e865a0aa15374bdd10d1146091"} Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.134923 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5b7f7" podStartSLOduration=123.134902824 podStartE2EDuration="2m3.134902824s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:32.134666877 +0000 UTC m=+143.179048968" watchObservedRunningTime="2025-11-29 06:36:32.134902824 +0000 UTC m=+143.179284905" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.135921 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv66j" event={"ID":"5da50f13-a25d-403c-8fda-39f93a5cf4fd","Type":"ContainerStarted","Data":"46a0e2ae7d30f9e10307a370bf6ebd19acf2dbad8bf2cf60cc26dc3275e239bb"} Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.136827 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv66j" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.137101 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-server-l7pgj" podStartSLOduration=7.137094419 podStartE2EDuration="7.137094419s" podCreationTimestamp="2025-11-29 06:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:32.047173183 +0000 UTC m=+143.091555264" watchObservedRunningTime="2025-11-29 06:36:32.137094419 +0000 UTC m=+143.181476500" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.147928 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" event={"ID":"7fc6440e-f991-4421-b078-9496ffdfb74d","Type":"ContainerStarted","Data":"55a09d340e95bb18c40759249e4fe86fe7cf893c23fb819cf3e14397ca6966dd"} Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.176006 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv66j" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.194113 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:32 crc kubenswrapper[4947]: E1129 06:36:32.194483 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:32.694468487 +0000 UTC m=+143.738850568 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.197465 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-zcdgs" podStartSLOduration=123.197450816 podStartE2EDuration="2m3.197450816s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:32.194648323 +0000 UTC m=+143.239030404" watchObservedRunningTime="2025-11-29 06:36:32.197450816 +0000 UTC m=+143.241832897" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.246668 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nswtf" event={"ID":"711e27d0-dd37-4f6f-adae-5c04bb856f47","Type":"ContainerStarted","Data":"3d7a0856b3eed704fd39d030c26cc763b373fa73bc10b6f0f261564436644ef1"} Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.258182 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv66j" podStartSLOduration=123.258161763 podStartE2EDuration="2m3.258161763s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:32.249952089 +0000 UTC m=+143.294334170" watchObservedRunningTime="2025-11-29 06:36:32.258161763 +0000 UTC m=+143.302543854" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 
06:36:32.297329 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:32 crc kubenswrapper[4947]: E1129 06:36:32.298576 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:32.798553225 +0000 UTC m=+143.842935306 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.303535 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4cgzv" event={"ID":"26b3766f-e08f-47e8-803e-d138e6e7620f","Type":"ContainerStarted","Data":"1a499905b61147bbd840f543f04b02e40b7546dc131b2eeb4a453bd61cdd7a75"} Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.320785 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-4cgzv" podStartSLOduration=8.320771727 podStartE2EDuration="8.320771727s" podCreationTimestamp="2025-11-29 06:36:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:32.319928642 +0000 UTC 
m=+143.364310733" watchObservedRunningTime="2025-11-29 06:36:32.320771727 +0000 UTC m=+143.365153808" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.321128 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-nswtf" podStartSLOduration=123.321124157 podStartE2EDuration="2m3.321124157s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:32.292048022 +0000 UTC m=+143.336430103" watchObservedRunningTime="2025-11-29 06:36:32.321124157 +0000 UTC m=+143.365506238" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.325165 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-flwtp" event={"ID":"d0bccdee-ee49-4d76-9826-0e8ece077528","Type":"ContainerStarted","Data":"911f0c25893435aa982834e583e724b9c74b32699766d246dc4b2d2d860a443d"} Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.351583 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" event={"ID":"f903f69c-2db9-478a-9141-22f6aeb27ce3","Type":"ContainerStarted","Data":"9ffd31e3254939340348e63e1d213b149dcfc2d8890d1314e95599a7e316aa49"} Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.363323 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-flwtp" podStartSLOduration=123.363303803 podStartE2EDuration="2m3.363303803s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:32.361973443 +0000 UTC m=+143.406355534" watchObservedRunningTime="2025-11-29 06:36:32.363303803 +0000 UTC m=+143.407685884" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.374983 
4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxz9x" event={"ID":"1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0","Type":"ContainerStarted","Data":"dadfd9435627de8c13450c605cd9a9eaccfd9ca4263e2ae0cc4ee3355507dedd"} Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.382325 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jmxk" event={"ID":"338a8f3b-127a-44a4-af55-0d7f034bbf17","Type":"ContainerStarted","Data":"9d91d8bdee3e3e6b5c1d1f21faea787561161f2676643370ac62ac03b55611b2"} Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.402835 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:32 crc kubenswrapper[4947]: E1129 06:36:32.405497 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:32.905483928 +0000 UTC m=+143.949866009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.408520 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dzzqk" event={"ID":"e8a3f832-b756-47ec-9729-7beacc669293","Type":"ContainerStarted","Data":"d7647849e8675664189b7bd3c4ae2d3b343078198738bf4ecd3fc76a93237ba2"} Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.415904 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7jmxk" podStartSLOduration=122.415883058 podStartE2EDuration="2m2.415883058s" podCreationTimestamp="2025-11-29 06:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:32.414394453 +0000 UTC m=+143.458776534" watchObservedRunningTime="2025-11-29 06:36:32.415883058 +0000 UTC m=+143.460265139" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.453317 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bkmbq" event={"ID":"e6f31928-dd2b-41c9-8103-3652eb01b1ad","Type":"ContainerStarted","Data":"0416876e1aca11f3c94dbc2adf25c021676cdac3e28368b856528dc6b2193dc3"} Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.489039 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzd69" 
event={"ID":"59495e16-0371-48e0-b517-0adb7ac8eb4f","Type":"ContainerStarted","Data":"fa95adfea922161842d2d8d82e561003090f1e6d445ac16e01ce17cc63fac5c5"} Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.489702 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzd69" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.510500 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:32 crc kubenswrapper[4947]: E1129 06:36:32.511921 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:33.011898476 +0000 UTC m=+144.056280567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.524415 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dzzqk" podStartSLOduration=123.524396568 podStartE2EDuration="2m3.524396568s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:32.466474404 +0000 UTC m=+143.510856495" watchObservedRunningTime="2025-11-29 06:36:32.524396568 +0000 UTC m=+143.568778659" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.526037 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bkmbq" podStartSLOduration=123.526027986 podStartE2EDuration="2m3.526027986s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:32.523417219 +0000 UTC m=+143.567799300" watchObservedRunningTime="2025-11-29 06:36:32.526027986 +0000 UTC m=+143.570410067" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.537464 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xjxbf" 
event={"ID":"4a5aec17-3235-4678-afa1-08da4b223f45","Type":"ContainerStarted","Data":"1e72024ae12b3692484b56b93f4b2687901092a74f97061fb1b0fc7019db7bf0"} Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.538299 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xjxbf" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.539163 4947 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-xjxbf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.539200 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xjxbf" podUID="4a5aec17-3235-4678-afa1-08da4b223f45" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.559194 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzd69" podStartSLOduration=123.559177823 podStartE2EDuration="2m3.559177823s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:32.557482923 +0000 UTC m=+143.601865024" watchObservedRunningTime="2025-11-29 06:36:32.559177823 +0000 UTC m=+143.603559904" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.572449 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dh8pj" 
event={"ID":"23977121-e0f0-4055-a727-c4050a20f2a6","Type":"ContainerStarted","Data":"9b248a827cafb4b45ac721078b61b0d9364040fa961c6d9fbec0f050d8d60280"} Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.586392 4947 patch_prober.go:28] interesting pod/router-default-5444994796-6ndbx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 06:36:32 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Nov 29 06:36:32 crc kubenswrapper[4947]: [+]process-running ok Nov 29 06:36:32 crc kubenswrapper[4947]: healthz check failed Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.586450 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6ndbx" podUID="a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.589028 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xjxbf" podStartSLOduration=123.589011541 podStartE2EDuration="2m3.589011541s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:32.588536417 +0000 UTC m=+143.632918498" watchObservedRunningTime="2025-11-29 06:36:32.589011541 +0000 UTC m=+143.633393622" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.671715 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:32 crc kubenswrapper[4947]: E1129 06:36:32.673642 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:33.173627772 +0000 UTC m=+144.218009853 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.681136 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2nj82" event={"ID":"967c6d72-d998-4b42-8de2-b9fd1712fc12","Type":"ContainerStarted","Data":"81cd01441efda151570ed36f762b69263cefeb692c704aa4666043c30e9076d8"} Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.712792 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt" event={"ID":"1859af1a-8cea-4330-b44f-69c94692bfde","Type":"ContainerStarted","Data":"86d1d7b64a702553908b061b0d6e7adcf5d0227cabe85340b9d9619d2646f0aa"} Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.712839 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.791887 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:32 crc kubenswrapper[4947]: E1129 06:36:32.792352 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:33.292334583 +0000 UTC m=+144.336716664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.831842 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-2nj82" podStartSLOduration=122.831824159 podStartE2EDuration="2m2.831824159s" podCreationTimestamp="2025-11-29 06:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:32.829988304 +0000 UTC m=+143.874370385" watchObservedRunningTime="2025-11-29 06:36:32.831824159 +0000 UTC m=+143.876206240" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.833637 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dh8pj" podStartSLOduration=123.833629432 podStartE2EDuration="2m3.833629432s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:32.791299462 +0000 UTC m=+143.835681553" watchObservedRunningTime="2025-11-29 06:36:32.833629432 +0000 UTC m=+143.878011513" Nov 29 06:36:32 crc kubenswrapper[4947]: I1129 06:36:32.928140 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:32 crc kubenswrapper[4947]: E1129 06:36:32.931287 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:33.431265109 +0000 UTC m=+144.475647190 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.032855 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:33 crc kubenswrapper[4947]: E1129 06:36:33.033270 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:33.533249464 +0000 UTC m=+144.577631545 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.136161 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:33 crc kubenswrapper[4947]: E1129 06:36:33.136608 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:33.63659378 +0000 UTC m=+144.680975861 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.239870 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:33 crc kubenswrapper[4947]: E1129 06:36:33.240418 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:33.74040242 +0000 UTC m=+144.784784501 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.341808 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:33 crc kubenswrapper[4947]: E1129 06:36:33.342175 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:33.842155849 +0000 UTC m=+144.886537980 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.443032 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:33 crc kubenswrapper[4947]: E1129 06:36:33.443227 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:33.943189336 +0000 UTC m=+144.987571417 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.443453 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:33 crc kubenswrapper[4947]: E1129 06:36:33.443876 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:33.943866097 +0000 UTC m=+144.988248258 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.544765 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:33 crc kubenswrapper[4947]: E1129 06:36:33.544968 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:34.044938065 +0000 UTC m=+145.089320146 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.545174 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:33 crc kubenswrapper[4947]: E1129 06:36:33.545542 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:34.045531273 +0000 UTC m=+145.089913424 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.576507 4947 patch_prober.go:28] interesting pod/router-default-5444994796-6ndbx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 06:36:33 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Nov 29 06:36:33 crc kubenswrapper[4947]: [+]process-running ok Nov 29 06:36:33 crc kubenswrapper[4947]: healthz check failed Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.576572 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6ndbx" podUID="a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.645837 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:33 crc kubenswrapper[4947]: E1129 06:36:33.646004 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 06:36:34.145975452 +0000 UTC m=+145.190357533 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.646137 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:33 crc kubenswrapper[4947]: E1129 06:36:33.646437 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:34.146422716 +0000 UTC m=+145.190804787 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.712583 4947 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-m5hbt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.712657 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt" podUID="1859af1a-8cea-4330-b44f-69c94692bfde" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.735861 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lx74d" event={"ID":"b85a2376-eba6-4a1e-b6eb-870ffc696f31","Type":"ContainerStarted","Data":"1414c7f5a209360fa451ea7db3bd4d99f66af74be679fda3d4de7bca1add3709"} Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.735902 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lx74d" event={"ID":"b85a2376-eba6-4a1e-b6eb-870ffc696f31","Type":"ContainerStarted","Data":"cd1ab9ed624d1d66ca947c6957420a8983953494ab20940b0ecafc280a48eff6"} Nov 29 06:36:33 crc kubenswrapper[4947]: 
I1129 06:36:33.747716 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:33 crc kubenswrapper[4947]: E1129 06:36:33.747853 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:34.247833964 +0000 UTC m=+145.292216055 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.748057 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:33 crc kubenswrapper[4947]: E1129 06:36:33.748419 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-29 06:36:34.248408551 +0000 UTC m=+145.292790632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.749088 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-td4m7" event={"ID":"0eed2dde-d158-43e1-8df2-f5a309ef3da3","Type":"ContainerStarted","Data":"a6eea82ecdf0ee25c53821fdd7aa10fcd9e723c8ab5fea6d3cf504381575a567"} Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.749127 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-td4m7" event={"ID":"0eed2dde-d158-43e1-8df2-f5a309ef3da3","Type":"ContainerStarted","Data":"dd7ce2cdfe75ad805866d7cb4477680737692b439293c49034dff6e78491fc31"} Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.749163 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-td4m7" Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.754804 4947 generic.go:334] "Generic (PLEG): container finished" podID="f903f69c-2db9-478a-9141-22f6aeb27ce3" containerID="e214531d918c1f5b4d9a4e9b551b5de7b052147288010bcb3d6bb24e3019d402" exitCode=0 Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.754876 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" event={"ID":"f903f69c-2db9-478a-9141-22f6aeb27ce3","Type":"ContainerDied","Data":"e214531d918c1f5b4d9a4e9b551b5de7b052147288010bcb3d6bb24e3019d402"} Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.775770 
4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dh8pj" event={"ID":"23977121-e0f0-4055-a727-c4050a20f2a6","Type":"ContainerStarted","Data":"245f0257e22b1171f1877c80b4f0cd7a9962ca00b018844b5d89cbf0d5d5f0e0"} Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.783891 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt" podStartSLOduration=123.783873307 podStartE2EDuration="2m3.783873307s" podCreationTimestamp="2025-11-29 06:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:32.926731494 +0000 UTC m=+143.971113585" watchObservedRunningTime="2025-11-29 06:36:33.783873307 +0000 UTC m=+144.828255388" Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.794476 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzd69" event={"ID":"59495e16-0371-48e0-b517-0adb7ac8eb4f","Type":"ContainerStarted","Data":"becdf5f341981c692965d0fd890375a9f65916aea30ec17730d56e175c673112"} Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.814125 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xjxbf" event={"ID":"4a5aec17-3235-4678-afa1-08da4b223f45","Type":"ContainerStarted","Data":"30e2a05ece3c4c9b31edbd379d665e6dbb62c65342b41975dfe321f6347c0868"} Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.820011 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-tcchr" event={"ID":"f060d79a-f223-455c-b203-0bd9e430a896","Type":"ContainerStarted","Data":"c0b6a7781efaef60d0a89c2ed2732343bea673d1d2fac6a756d0cc6f85fe3d08"} Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.841297 4947 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-td4m7" podStartSLOduration=9.841275506 podStartE2EDuration="9.841275506s" podCreationTimestamp="2025-11-29 06:36:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:33.786706821 +0000 UTC m=+144.831088902" watchObservedRunningTime="2025-11-29 06:36:33.841275506 +0000 UTC m=+144.885657587" Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.841561 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xjxbf" Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.849433 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.849803 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lkf5s" event={"ID":"79190f34-e70f-4fa8-b8da-7db3b29678a0","Type":"ContainerStarted","Data":"d34e829620259d41e3f341efec5e96dec3dd3287ed1bb03270a065bfc67ca973"} Nov 29 06:36:33 crc kubenswrapper[4947]: E1129 06:36:33.850541 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:34.350519811 +0000 UTC m=+145.394901962 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.874422 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4cgzv" event={"ID":"26b3766f-e08f-47e8-803e-d138e6e7620f","Type":"ContainerStarted","Data":"c31abb81678e7fabd06cbf4512e5155e74597b7a099dd355931a25e9c41fbf95"} Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.904615 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-66xmw" event={"ID":"4cd078d1-1fb6-4997-a55e-f90cfea7bf7a","Type":"ContainerStarted","Data":"07dc7c71ca207c4bee88d1c394e8964381617a6310255ade734aa24e62ab122d"} Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.914656 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" event={"ID":"7fc6440e-f991-4421-b078-9496ffdfb74d","Type":"ContainerStarted","Data":"e1ec3d5a1db36f4e837ba03eff621716ed7504d4bc02587afc8a801fbf18ffef"} Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.928856 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-tcchr" podStartSLOduration=124.928834752 podStartE2EDuration="2m4.928834752s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:33.882583335 +0000 UTC m=+144.926965416" watchObservedRunningTime="2025-11-29 
06:36:33.928834752 +0000 UTC m=+144.973216833" Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.936407 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh" event={"ID":"3f161f69-0220-4b3f-9f46-76277cd105f9","Type":"ContainerStarted","Data":"14db6dd613c8c9dea1c8d42e9341e43b5be15363a2ac81ec32d4573cf39a077f"} Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.939862 4947 generic.go:334] "Generic (PLEG): container finished" podID="ec9073bd-31ec-4b35-93a9-08d26b62c60d" containerID="af7a898522d6847c069e00670e6e7f7675beaa1aa1b11238ca71d026ec5ffbb6" exitCode=0 Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.940354 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b45sk" event={"ID":"ec9073bd-31ec-4b35-93a9-08d26b62c60d","Type":"ContainerDied","Data":"af7a898522d6847c069e00670e6e7f7675beaa1aa1b11238ca71d026ec5ffbb6"} Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.950906 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:33 crc kubenswrapper[4947]: E1129 06:36:33.955473 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:34.455455834 +0000 UTC m=+145.499837915 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.979139 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2drts" event={"ID":"d950d1cc-546a-4650-ab9c-e58388bda769","Type":"ContainerStarted","Data":"eff31f6cd7e8473ed2ae71d83e4565d6085ae3a57e48186d83fd77f00f640bc3"} Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.994201 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" event={"ID":"675de8ae-169a-4737-a290-54cdb32d8cb0","Type":"ContainerStarted","Data":"2111f172b313f21f1ec488e2f051f6deac9e9588feb338a541ecdbd5d75e5c13"} Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.994787 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.999386 4947 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6mhws container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Nov 29 06:36:33 crc kubenswrapper[4947]: I1129 06:36:33.999443 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.000111 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-66xmw" podStartSLOduration=125.000090973 podStartE2EDuration="2m5.000090973s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:33.969783061 +0000 UTC m=+145.014165132" watchObservedRunningTime="2025-11-29 06:36:34.000090973 +0000 UTC m=+145.044473054" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.000298 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lkf5s" podStartSLOduration=125.000291999 podStartE2EDuration="2m5.000291999s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:33.998513146 +0000 UTC m=+145.042895227" watchObservedRunningTime="2025-11-29 06:36:34.000291999 +0000 UTC m=+145.044674090" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.028059 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh" podStartSLOduration=125.028040005 podStartE2EDuration="2m5.028040005s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:34.02686345 +0000 UTC m=+145.071245531" watchObservedRunningTime="2025-11-29 06:36:34.028040005 +0000 UTC m=+145.072422086" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.041773 4947 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cwrn9" event={"ID":"07b30d68-832e-44c3-aa22-18c8f1cbb6e6","Type":"ContainerStarted","Data":"56dd59975222509a55eadda2cdb8c95f1e5796d691204459029bdc0232c37c85"} Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.052701 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:34 crc kubenswrapper[4947]: E1129 06:36:34.053480 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:34.553465172 +0000 UTC m=+145.597847253 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.053757 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-2drts" podStartSLOduration=125.05374363 podStartE2EDuration="2m5.05374363s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:34.050938447 +0000 UTC m=+145.095320528" watchObservedRunningTime="2025-11-29 06:36:34.05374363 +0000 UTC m=+145.098125711" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.074405 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxz9x" event={"ID":"1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0","Type":"ContainerStarted","Data":"98c0a6c7899a9083a6e868ef9f384793acd290b5d8c719579c15054ed03978eb"} Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.074470 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxz9x" event={"ID":"1627cfb6-29e2-4b2e-ae8b-0dcd6d125da0","Type":"ContainerStarted","Data":"8dfc4bee567e60726cf1b602b47e5335c12489997d357ad7b825ab577f821178"} Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.080457 4947 patch_prober.go:28] interesting pod/downloads-7954f5f757-zcdgs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 
10.217.0.35:8080: connect: connection refused" start-of-body= Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.080519 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zcdgs" podUID="e0e2a5e3-321b-4774-bf83-dd727fc954d2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.081623 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" podStartSLOduration=125.081603359 podStartE2EDuration="2m5.081603359s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:34.081562318 +0000 UTC m=+145.125944399" watchObservedRunningTime="2025-11-29 06:36:34.081603359 +0000 UTC m=+145.125985440" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.153504 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cwrn9" podStartSLOduration=125.153484559 podStartE2EDuration="2m5.153484559s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:34.152213731 +0000 UTC m=+145.196595812" watchObservedRunningTime="2025-11-29 06:36:34.153484559 +0000 UTC m=+145.197866650" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.167800 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: 
\"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:34 crc kubenswrapper[4947]: E1129 06:36:34.169161 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:34.669140755 +0000 UTC m=+145.713522826 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.204004 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xxz9x" podStartSLOduration=125.203981912 podStartE2EDuration="2m5.203981912s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:34.18879448 +0000 UTC m=+145.233176561" watchObservedRunningTime="2025-11-29 06:36:34.203981912 +0000 UTC m=+145.248363993" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.227270 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-m5hbt" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.229691 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.269165 4947 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:34 crc kubenswrapper[4947]: E1129 06:36:34.270248 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:34.770233754 +0000 UTC m=+145.814615825 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.278173 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.278923 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.288992 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.289614 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.310350 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.371687 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:34 crc kubenswrapper[4947]: E1129 06:36:34.372071 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:34.872055595 +0000 UTC m=+145.916437676 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.472813 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:34 crc kubenswrapper[4947]: E1129 06:36:34.473008 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:34.972982779 +0000 UTC m=+146.017364860 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.473060 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35af846b-b152-4122-8ec9-3eec41937893-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"35af846b-b152-4122-8ec9-3eec41937893\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.473123 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.473184 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35af846b-b152-4122-8ec9-3eec41937893-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"35af846b-b152-4122-8ec9-3eec41937893\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 06:36:34 crc kubenswrapper[4947]: E1129 06:36:34.473474 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-29 06:36:34.973467503 +0000 UTC m=+146.017849584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.573986 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:34 crc kubenswrapper[4947]: E1129 06:36:34.574176 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:35.07414536 +0000 UTC m=+146.118527451 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.574271 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35af846b-b152-4122-8ec9-3eec41937893-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"35af846b-b152-4122-8ec9-3eec41937893\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.574332 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.574393 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35af846b-b152-4122-8ec9-3eec41937893-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"35af846b-b152-4122-8ec9-3eec41937893\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.574401 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35af846b-b152-4122-8ec9-3eec41937893-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"35af846b-b152-4122-8ec9-3eec41937893\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 06:36:34 crc kubenswrapper[4947]: E1129 06:36:34.574752 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:35.074734538 +0000 UTC m=+146.119116689 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.577848 4947 patch_prober.go:28] interesting pod/router-default-5444994796-6ndbx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 06:36:34 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Nov 29 06:36:34 crc kubenswrapper[4947]: [+]process-running ok Nov 29 06:36:34 crc kubenswrapper[4947]: healthz check failed Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.577917 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6ndbx" podUID="a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.618518 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t4vb4"] Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.620682 4947 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t4vb4" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.622524 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.629048 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35af846b-b152-4122-8ec9-3eec41937893-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"35af846b-b152-4122-8ec9-3eec41937893\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.630372 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t4vb4"] Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.675789 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:34 crc kubenswrapper[4947]: E1129 06:36:34.675992 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:35.17596121 +0000 UTC m=+146.220343291 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.676207 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:34 crc kubenswrapper[4947]: E1129 06:36:34.676545 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:35.176529267 +0000 UTC m=+146.220911418 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.777340 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:34 crc kubenswrapper[4947]: E1129 06:36:34.777498 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:35.277477102 +0000 UTC m=+146.321859183 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.777578 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.777699 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/159f707e-f150-45c6-9371-6b4b272eaf5d-utilities\") pod \"certified-operators-t4vb4\" (UID: \"159f707e-f150-45c6-9371-6b4b272eaf5d\") " pod="openshift-marketplace/certified-operators-t4vb4" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.777733 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd5ln\" (UniqueName: \"kubernetes.io/projected/159f707e-f150-45c6-9371-6b4b272eaf5d-kube-api-access-fd5ln\") pod \"certified-operators-t4vb4\" (UID: \"159f707e-f150-45c6-9371-6b4b272eaf5d\") " pod="openshift-marketplace/certified-operators-t4vb4" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.777791 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/159f707e-f150-45c6-9371-6b4b272eaf5d-catalog-content\") pod \"certified-operators-t4vb4\" (UID: \"159f707e-f150-45c6-9371-6b4b272eaf5d\") " pod="openshift-marketplace/certified-operators-t4vb4" Nov 29 06:36:34 crc kubenswrapper[4947]: E1129 06:36:34.777917 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:35.277908554 +0000 UTC m=+146.322290635 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.815558 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-phgdn"] Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.816670 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-phgdn" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.818363 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.827565 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-phgdn"] Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.878608 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:34 crc kubenswrapper[4947]: E1129 06:36:34.878831 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:35.378796697 +0000 UTC m=+146.423178798 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.878880 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/159f707e-f150-45c6-9371-6b4b272eaf5d-utilities\") pod \"certified-operators-t4vb4\" (UID: \"159f707e-f150-45c6-9371-6b4b272eaf5d\") " pod="openshift-marketplace/certified-operators-t4vb4" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.878914 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd5ln\" (UniqueName: \"kubernetes.io/projected/159f707e-f150-45c6-9371-6b4b272eaf5d-kube-api-access-fd5ln\") pod \"certified-operators-t4vb4\" (UID: \"159f707e-f150-45c6-9371-6b4b272eaf5d\") " pod="openshift-marketplace/certified-operators-t4vb4" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.878937 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/159f707e-f150-45c6-9371-6b4b272eaf5d-catalog-content\") pod \"certified-operators-t4vb4\" (UID: \"159f707e-f150-45c6-9371-6b4b272eaf5d\") " pod="openshift-marketplace/certified-operators-t4vb4" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.878978 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: 
\"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:34 crc kubenswrapper[4947]: E1129 06:36:34.879239 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:35.37922896 +0000 UTC m=+146.423611041 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.879425 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/159f707e-f150-45c6-9371-6b4b272eaf5d-utilities\") pod \"certified-operators-t4vb4\" (UID: \"159f707e-f150-45c6-9371-6b4b272eaf5d\") " pod="openshift-marketplace/certified-operators-t4vb4" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.879459 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/159f707e-f150-45c6-9371-6b4b272eaf5d-catalog-content\") pod \"certified-operators-t4vb4\" (UID: \"159f707e-f150-45c6-9371-6b4b272eaf5d\") " pod="openshift-marketplace/certified-operators-t4vb4" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.898391 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd5ln\" (UniqueName: \"kubernetes.io/projected/159f707e-f150-45c6-9371-6b4b272eaf5d-kube-api-access-fd5ln\") pod \"certified-operators-t4vb4\" (UID: 
\"159f707e-f150-45c6-9371-6b4b272eaf5d\") " pod="openshift-marketplace/certified-operators-t4vb4" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.912343 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.958429 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t4vb4" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.980408 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:34 crc kubenswrapper[4947]: E1129 06:36:34.980642 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:35.480605168 +0000 UTC m=+146.524987269 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.980716 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pfz9\" (UniqueName: \"kubernetes.io/projected/b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8-kube-api-access-5pfz9\") pod \"community-operators-phgdn\" (UID: \"b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8\") " pod="openshift-marketplace/community-operators-phgdn" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.980749 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8-utilities\") pod \"community-operators-phgdn\" (UID: \"b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8\") " pod="openshift-marketplace/community-operators-phgdn" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.980804 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:34 crc kubenswrapper[4947]: I1129 06:36:34.980971 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8-catalog-content\") pod \"community-operators-phgdn\" (UID: \"b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8\") " pod="openshift-marketplace/community-operators-phgdn" Nov 29 06:36:34 crc kubenswrapper[4947]: E1129 06:36:34.981436 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:35.481417052 +0000 UTC m=+146.525799213 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.016866 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gbhvj"] Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.017977 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gbhvj" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.039148 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gbhvj"] Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.082756 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.082965 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8-utilities\") pod \"community-operators-phgdn\" (UID: \"b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8\") " pod="openshift-marketplace/community-operators-phgdn" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.083087 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8-catalog-content\") pod \"community-operators-phgdn\" (UID: \"b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8\") " pod="openshift-marketplace/community-operators-phgdn" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.083138 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pfz9\" (UniqueName: \"kubernetes.io/projected/b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8-kube-api-access-5pfz9\") pod \"community-operators-phgdn\" (UID: \"b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8\") " pod="openshift-marketplace/community-operators-phgdn" Nov 29 06:36:35 crc kubenswrapper[4947]: E1129 06:36:35.083543 4947 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:35.583525911 +0000 UTC m=+146.627907992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.084031 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8-utilities\") pod \"community-operators-phgdn\" (UID: \"b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8\") " pod="openshift-marketplace/community-operators-phgdn" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.084357 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8-catalog-content\") pod \"community-operators-phgdn\" (UID: \"b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8\") " pod="openshift-marketplace/community-operators-phgdn" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.124728 4947 generic.go:334] "Generic (PLEG): container finished" podID="3f161f69-0220-4b3f-9f46-76277cd105f9" containerID="14db6dd613c8c9dea1c8d42e9341e43b5be15363a2ac81ec32d4573cf39a077f" exitCode=0 Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.125608 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pfz9\" (UniqueName: 
\"kubernetes.io/projected/b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8-kube-api-access-5pfz9\") pod \"community-operators-phgdn\" (UID: \"b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8\") " pod="openshift-marketplace/community-operators-phgdn" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.126613 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh" event={"ID":"3f161f69-0220-4b3f-9f46-76277cd105f9","Type":"ContainerDied","Data":"14db6dd613c8c9dea1c8d42e9341e43b5be15363a2ac81ec32d4573cf39a077f"} Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.126844 4947 patch_prober.go:28] interesting pod/downloads-7954f5f757-zcdgs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.126888 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zcdgs" podUID="e0e2a5e3-321b-4774-bf83-dd727fc954d2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.126963 4947 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6mhws container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.126979 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: 
connection refused" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.129746 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-phgdn" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.179026 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-lx74d" podStartSLOduration=126.179006643 podStartE2EDuration="2m6.179006643s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:35.177326743 +0000 UTC m=+146.221708854" watchObservedRunningTime="2025-11-29 06:36:35.179006643 +0000 UTC m=+146.223388724" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.183890 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a37f770-07c2-40b1-9f24-ccddc3215658-catalog-content\") pod \"certified-operators-gbhvj\" (UID: \"1a37f770-07c2-40b1-9f24-ccddc3215658\") " pod="openshift-marketplace/certified-operators-gbhvj" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.183949 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lztlt\" (UniqueName: \"kubernetes.io/projected/1a37f770-07c2-40b1-9f24-ccddc3215658-kube-api-access-lztlt\") pod \"certified-operators-gbhvj\" (UID: \"1a37f770-07c2-40b1-9f24-ccddc3215658\") " pod="openshift-marketplace/certified-operators-gbhvj" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.183980 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: 
\"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.184014 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a37f770-07c2-40b1-9f24-ccddc3215658-utilities\") pod \"certified-operators-gbhvj\" (UID: \"1a37f770-07c2-40b1-9f24-ccddc3215658\") " pod="openshift-marketplace/certified-operators-gbhvj" Nov 29 06:36:35 crc kubenswrapper[4947]: E1129 06:36:35.184317 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:35.684304991 +0000 UTC m=+146.728687072 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.214451 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.217018 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wqz4n"] Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.218114 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wqz4n" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.224606 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wqz4n"] Nov 29 06:36:35 crc kubenswrapper[4947]: W1129 06:36:35.257267 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod35af846b_b152_4122_8ec9_3eec41937893.slice/crio-45254bcfb48c569ec373f865ac34245ecc6790434c627994d25d2744a310818c WatchSource:0}: Error finding container 45254bcfb48c569ec373f865ac34245ecc6790434c627994d25d2744a310818c: Status 404 returned error can't find the container with id 45254bcfb48c569ec373f865ac34245ecc6790434c627994d25d2744a310818c Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.284593 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:35 crc kubenswrapper[4947]: E1129 06:36:35.284765 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:35.784740251 +0000 UTC m=+146.829122332 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.287239 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a37f770-07c2-40b1-9f24-ccddc3215658-catalog-content\") pod \"certified-operators-gbhvj\" (UID: \"1a37f770-07c2-40b1-9f24-ccddc3215658\") " pod="openshift-marketplace/certified-operators-gbhvj" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.287590 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lztlt\" (UniqueName: \"kubernetes.io/projected/1a37f770-07c2-40b1-9f24-ccddc3215658-kube-api-access-lztlt\") pod \"certified-operators-gbhvj\" (UID: \"1a37f770-07c2-40b1-9f24-ccddc3215658\") " pod="openshift-marketplace/certified-operators-gbhvj" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.287713 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a37f770-07c2-40b1-9f24-ccddc3215658-catalog-content\") pod \"certified-operators-gbhvj\" (UID: \"1a37f770-07c2-40b1-9f24-ccddc3215658\") " pod="openshift-marketplace/certified-operators-gbhvj" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.287724 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: 
\"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.289595 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a37f770-07c2-40b1-9f24-ccddc3215658-utilities\") pod \"certified-operators-gbhvj\" (UID: \"1a37f770-07c2-40b1-9f24-ccddc3215658\") " pod="openshift-marketplace/certified-operators-gbhvj" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.290470 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a37f770-07c2-40b1-9f24-ccddc3215658-utilities\") pod \"certified-operators-gbhvj\" (UID: \"1a37f770-07c2-40b1-9f24-ccddc3215658\") " pod="openshift-marketplace/certified-operators-gbhvj" Nov 29 06:36:35 crc kubenswrapper[4947]: E1129 06:36:35.291166 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:35.791148851 +0000 UTC m=+146.835531012 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.310091 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lztlt\" (UniqueName: \"kubernetes.io/projected/1a37f770-07c2-40b1-9f24-ccddc3215658-kube-api-access-lztlt\") pod \"certified-operators-gbhvj\" (UID: \"1a37f770-07c2-40b1-9f24-ccddc3215658\") " pod="openshift-marketplace/certified-operators-gbhvj" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.323183 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t4vb4"] Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.378180 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gbhvj" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.390308 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:35 crc kubenswrapper[4947]: E1129 06:36:35.390504 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 06:36:35.890446087 +0000 UTC m=+146.934828178 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.390590 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt875\" (UniqueName: \"kubernetes.io/projected/658b72a9-13fc-4881-88c5-109b221bbc48-kube-api-access-wt875\") pod \"community-operators-wqz4n\" (UID: \"658b72a9-13fc-4881-88c5-109b221bbc48\") " pod="openshift-marketplace/community-operators-wqz4n" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.390831 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/658b72a9-13fc-4881-88c5-109b221bbc48-utilities\") pod \"community-operators-wqz4n\" (UID: \"658b72a9-13fc-4881-88c5-109b221bbc48\") " pod="openshift-marketplace/community-operators-wqz4n" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.391034 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.391087 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/658b72a9-13fc-4881-88c5-109b221bbc48-catalog-content\") pod \"community-operators-wqz4n\" (UID: \"658b72a9-13fc-4881-88c5-109b221bbc48\") " pod="openshift-marketplace/community-operators-wqz4n" Nov 29 06:36:35 crc kubenswrapper[4947]: E1129 06:36:35.391412 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:35.891397385 +0000 UTC m=+146.935779466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.494394 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.494701 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/658b72a9-13fc-4881-88c5-109b221bbc48-catalog-content\") pod \"community-operators-wqz4n\" (UID: \"658b72a9-13fc-4881-88c5-109b221bbc48\") " pod="openshift-marketplace/community-operators-wqz4n" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.494730 4947 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-wt875\" (UniqueName: \"kubernetes.io/projected/658b72a9-13fc-4881-88c5-109b221bbc48-kube-api-access-wt875\") pod \"community-operators-wqz4n\" (UID: \"658b72a9-13fc-4881-88c5-109b221bbc48\") " pod="openshift-marketplace/community-operators-wqz4n" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.494766 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/658b72a9-13fc-4881-88c5-109b221bbc48-utilities\") pod \"community-operators-wqz4n\" (UID: \"658b72a9-13fc-4881-88c5-109b221bbc48\") " pod="openshift-marketplace/community-operators-wqz4n" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.495148 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/658b72a9-13fc-4881-88c5-109b221bbc48-utilities\") pod \"community-operators-wqz4n\" (UID: \"658b72a9-13fc-4881-88c5-109b221bbc48\") " pod="openshift-marketplace/community-operators-wqz4n" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.495366 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/658b72a9-13fc-4881-88c5-109b221bbc48-catalog-content\") pod \"community-operators-wqz4n\" (UID: \"658b72a9-13fc-4881-88c5-109b221bbc48\") " pod="openshift-marketplace/community-operators-wqz4n" Nov 29 06:36:35 crc kubenswrapper[4947]: E1129 06:36:35.495455 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:35.995441672 +0000 UTC m=+147.039823843 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.514891 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt875\" (UniqueName: \"kubernetes.io/projected/658b72a9-13fc-4881-88c5-109b221bbc48-kube-api-access-wt875\") pod \"community-operators-wqz4n\" (UID: \"658b72a9-13fc-4881-88c5-109b221bbc48\") " pod="openshift-marketplace/community-operators-wqz4n" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.542747 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wqz4n" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.577414 4947 patch_prober.go:28] interesting pod/router-default-5444994796-6ndbx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 06:36:35 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Nov 29 06:36:35 crc kubenswrapper[4947]: [+]process-running ok Nov 29 06:36:35 crc kubenswrapper[4947]: healthz check failed Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.577670 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6ndbx" podUID="a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.596098 4947 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:35 crc kubenswrapper[4947]: E1129 06:36:35.596475 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:36.096460849 +0000 UTC m=+147.140842930 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.666987 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gbhvj"] Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.697570 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:35 crc kubenswrapper[4947]: E1129 06:36:35.697818 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 06:36:36.197787645 +0000 UTC m=+147.242169766 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.697922 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:35 crc kubenswrapper[4947]: E1129 06:36:35.698397 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:36.198383193 +0000 UTC m=+147.242765274 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.706744 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-phgdn"] Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.726654 4947 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 29 06:36:35 crc kubenswrapper[4947]: W1129 06:36:35.743504 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9d714ea_dfd5_4e58_83d6_13a1c1cebfd8.slice/crio-add974d42f317674dda406080d8dcc2d231cb389ce6b47c1a8d2456f6b1831b7 WatchSource:0}: Error finding container add974d42f317674dda406080d8dcc2d231cb389ce6b47c1a8d2456f6b1831b7: Status 404 returned error can't find the container with id add974d42f317674dda406080d8dcc2d231cb389ce6b47c1a8d2456f6b1831b7 Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.779035 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wqz4n"] Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.798812 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:35 crc 
kubenswrapper[4947]: E1129 06:36:35.799099 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:36.299085701 +0000 UTC m=+147.343467782 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:35 crc kubenswrapper[4947]: I1129 06:36:35.900339 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:35 crc kubenswrapper[4947]: E1129 06:36:35.900671 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:36.400657664 +0000 UTC m=+147.445039745 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.001127 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:36 crc kubenswrapper[4947]: E1129 06:36:36.001436 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:36.501421733 +0000 UTC m=+147.545803814 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.102534 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:36 crc kubenswrapper[4947]: E1129 06:36:36.102933 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:36.602917574 +0000 UTC m=+147.647299655 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.133213 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbhvj" event={"ID":"1a37f770-07c2-40b1-9f24-ccddc3215658","Type":"ContainerStarted","Data":"881876073f241f7e31165ab4e1d26169e20c99427fe39b13dff3e31fc512ab9f"} Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.135842 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b45sk" event={"ID":"ec9073bd-31ec-4b35-93a9-08d26b62c60d","Type":"ContainerStarted","Data":"ee9d4a42c432c311f1b5645bad5d006fae6c668b59e61619c01eca49ed3c8b63"} Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.136996 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4vb4" event={"ID":"159f707e-f150-45c6-9371-6b4b272eaf5d","Type":"ContainerStarted","Data":"aa77bd2983d90833ecdc4ddbc40076fdcfce1edf477bd5e816563a24ecdb6105"} Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.138282 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqz4n" event={"ID":"658b72a9-13fc-4881-88c5-109b221bbc48","Type":"ContainerStarted","Data":"6df30375539c85f723c5b1e1bc66fed564abaefeea10a108dd05a182cd3c5bb4"} Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.139275 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"35af846b-b152-4122-8ec9-3eec41937893","Type":"ContainerStarted","Data":"45254bcfb48c569ec373f865ac34245ecc6790434c627994d25d2744a310818c"} Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.141313 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" event={"ID":"7fc6440e-f991-4421-b078-9496ffdfb74d","Type":"ContainerStarted","Data":"274b6ce67955c81d4dc63ef9ddaf76939b37900a856f7ca6e3b612fac564cb90"} Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.142356 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phgdn" event={"ID":"b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8","Type":"ContainerStarted","Data":"add974d42f317674dda406080d8dcc2d231cb389ce6b47c1a8d2456f6b1831b7"} Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.144261 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" event={"ID":"f903f69c-2db9-478a-9141-22f6aeb27ce3","Type":"ContainerStarted","Data":"5e8c51007bff1388dd14cd27b947eb7c071e3a460e65f4ca0fc5816d89af7802"} Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.170238 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" podStartSLOduration=126.170198237 podStartE2EDuration="2m6.170198237s" podCreationTimestamp="2025-11-29 06:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:36.164708144 +0000 UTC m=+147.209090245" watchObservedRunningTime="2025-11-29 06:36:36.170198237 +0000 UTC m=+147.214580348" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.203091 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:36 crc kubenswrapper[4947]: E1129 06:36:36.203291 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:36.703266421 +0000 UTC m=+147.747648502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.203815 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:36 crc kubenswrapper[4947]: E1129 06:36:36.204276 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:36.704263551 +0000 UTC m=+147.748645752 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.309665 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:36 crc kubenswrapper[4947]: E1129 06:36:36.309804 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:36.809782572 +0000 UTC m=+147.854164653 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.309861 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.309891 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.309955 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:36:36 crc kubenswrapper[4947]: E1129 06:36:36.311345 4947 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:36.811330378 +0000 UTC m=+147.855712449 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.316626 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.352525 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.410780 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l2xv\" (UniqueName: \"kubernetes.io/projected/3f161f69-0220-4b3f-9f46-76277cd105f9-kube-api-access-8l2xv\") pod \"3f161f69-0220-4b3f-9f46-76277cd105f9\" (UID: \"3f161f69-0220-4b3f-9f46-76277cd105f9\") " Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.410835 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f161f69-0220-4b3f-9f46-76277cd105f9-secret-volume\") pod \"3f161f69-0220-4b3f-9f46-76277cd105f9\" (UID: \"3f161f69-0220-4b3f-9f46-76277cd105f9\") " Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.410970 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.411004 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f161f69-0220-4b3f-9f46-76277cd105f9-config-volume\") pod \"3f161f69-0220-4b3f-9f46-76277cd105f9\" (UID: \"3f161f69-0220-4b3f-9f46-76277cd105f9\") " Nov 29 06:36:36 crc kubenswrapper[4947]: E1129 06:36:36.411211 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 06:36:36.91117058 +0000 UTC m=+147.955552661 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.411478 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f161f69-0220-4b3f-9f46-76277cd105f9-config-volume" (OuterVolumeSpecName: "config-volume") pod "3f161f69-0220-4b3f-9f46-76277cd105f9" (UID: "3f161f69-0220-4b3f-9f46-76277cd105f9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.414981 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f161f69-0220-4b3f-9f46-76277cd105f9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3f161f69-0220-4b3f-9f46-76277cd105f9" (UID: "3f161f69-0220-4b3f-9f46-76277cd105f9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.415386 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f161f69-0220-4b3f-9f46-76277cd105f9-kube-api-access-8l2xv" (OuterVolumeSpecName: "kube-api-access-8l2xv") pod "3f161f69-0220-4b3f-9f46-76277cd105f9" (UID: "3f161f69-0220-4b3f-9f46-76277cd105f9"). InnerVolumeSpecName "kube-api-access-8l2xv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.464986 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.512301 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.512380 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.512403 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.512438 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l2xv\" (UniqueName: 
\"kubernetes.io/projected/3f161f69-0220-4b3f-9f46-76277cd105f9-kube-api-access-8l2xv\") on node \"crc\" DevicePath \"\"" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.512448 4947 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f161f69-0220-4b3f-9f46-76277cd105f9-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.512456 4947 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f161f69-0220-4b3f-9f46-76277cd105f9-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 06:36:36 crc kubenswrapper[4947]: E1129 06:36:36.512707 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 06:36:37.012693322 +0000 UTC m=+148.057075403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cjcrf" (UID: "819051f4-236d-42d3-b3cf-c82103136dce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.515506 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.518861 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.542772 4947 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-29T06:36:35.726674345Z","Handler":null,"Name":""} Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.552672 4947 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.552714 4947 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: 
kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.576900 4947 patch_prober.go:28] interesting pod/router-default-5444994796-6ndbx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 06:36:36 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Nov 29 06:36:36 crc kubenswrapper[4947]: [+]process-running ok Nov 29 06:36:36 crc kubenswrapper[4947]: healthz check failed Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.576947 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6ndbx" podUID="a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.612925 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.614805 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fq4zz"] Nov 29 06:36:36 crc kubenswrapper[4947]: E1129 06:36:36.615043 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f161f69-0220-4b3f-9f46-76277cd105f9" containerName="collect-profiles" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.615064 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f161f69-0220-4b3f-9f46-76277cd105f9" containerName="collect-profiles" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.615174 4947 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="3f161f69-0220-4b3f-9f46-76277cd105f9" containerName="collect-profiles" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.616032 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fq4zz" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.618786 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.620381 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.629794 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fq4zz"] Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.697557 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.713908 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e620beb-b5db-4321-b404-0ef499ded600-utilities\") pod \"redhat-marketplace-fq4zz\" (UID: \"1e620beb-b5db-4321-b404-0ef499ded600\") " pod="openshift-marketplace/redhat-marketplace-fq4zz" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.714011 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e620beb-b5db-4321-b404-0ef499ded600-catalog-content\") pod \"redhat-marketplace-fq4zz\" (UID: \"1e620beb-b5db-4321-b404-0ef499ded600\") " pod="openshift-marketplace/redhat-marketplace-fq4zz" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.714061 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.714269 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xpnc\" (UniqueName: \"kubernetes.io/projected/1e620beb-b5db-4321-b404-0ef499ded600-kube-api-access-4xpnc\") pod \"redhat-marketplace-fq4zz\" (UID: \"1e620beb-b5db-4321-b404-0ef499ded600\") " pod="openshift-marketplace/redhat-marketplace-fq4zz" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.717661 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.792825 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.815128 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e620beb-b5db-4321-b404-0ef499ded600-catalog-content\") pod \"redhat-marketplace-fq4zz\" (UID: \"1e620beb-b5db-4321-b404-0ef499ded600\") " pod="openshift-marketplace/redhat-marketplace-fq4zz" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.815549 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xpnc\" (UniqueName: \"kubernetes.io/projected/1e620beb-b5db-4321-b404-0ef499ded600-kube-api-access-4xpnc\") pod \"redhat-marketplace-fq4zz\" (UID: \"1e620beb-b5db-4321-b404-0ef499ded600\") " pod="openshift-marketplace/redhat-marketplace-fq4zz" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.815625 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e620beb-b5db-4321-b404-0ef499ded600-utilities\") pod \"redhat-marketplace-fq4zz\" (UID: \"1e620beb-b5db-4321-b404-0ef499ded600\") " pod="openshift-marketplace/redhat-marketplace-fq4zz" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.815387 4947 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.815804 4947 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.815738 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e620beb-b5db-4321-b404-0ef499ded600-catalog-content\") pod \"redhat-marketplace-fq4zz\" (UID: \"1e620beb-b5db-4321-b404-0ef499ded600\") " pod="openshift-marketplace/redhat-marketplace-fq4zz" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.816362 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e620beb-b5db-4321-b404-0ef499ded600-utilities\") pod \"redhat-marketplace-fq4zz\" (UID: \"1e620beb-b5db-4321-b404-0ef499ded600\") " pod="openshift-marketplace/redhat-marketplace-fq4zz" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.837689 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xpnc\" (UniqueName: \"kubernetes.io/projected/1e620beb-b5db-4321-b404-0ef499ded600-kube-api-access-4xpnc\") pod \"redhat-marketplace-fq4zz\" (UID: \"1e620beb-b5db-4321-b404-0ef499ded600\") " pod="openshift-marketplace/redhat-marketplace-fq4zz" Nov 29 06:36:36 crc kubenswrapper[4947]: I1129 06:36:36.978563 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fq4zz" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.037058 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8zln5"] Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.038710 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8zln5" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.063710 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zln5"] Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.122749 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/253ae1cb-50f4-48e7-a004-a70a958c27cd-utilities\") pod \"redhat-marketplace-8zln5\" (UID: \"253ae1cb-50f4-48e7-a004-a70a958c27cd\") " pod="openshift-marketplace/redhat-marketplace-8zln5" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.122792 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2cf6\" (UniqueName: \"kubernetes.io/projected/253ae1cb-50f4-48e7-a004-a70a958c27cd-kube-api-access-m2cf6\") pod \"redhat-marketplace-8zln5\" (UID: \"253ae1cb-50f4-48e7-a004-a70a958c27cd\") " pod="openshift-marketplace/redhat-marketplace-8zln5" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.122829 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/253ae1cb-50f4-48e7-a004-a70a958c27cd-catalog-content\") pod \"redhat-marketplace-8zln5\" (UID: \"253ae1cb-50f4-48e7-a004-a70a958c27cd\") " pod="openshift-marketplace/redhat-marketplace-8zln5" Nov 29 06:36:37 crc kubenswrapper[4947]: W1129 06:36:37.142935 4947 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-77873ccd611122c6df1aba594ba62a512d7cc80988026a5c337509f45703528f WatchSource:0}: Error finding container 77873ccd611122c6df1aba594ba62a512d7cc80988026a5c337509f45703528f: Status 404 returned error can't find the container with id 77873ccd611122c6df1aba594ba62a512d7cc80988026a5c337509f45703528f Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.153819 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"77873ccd611122c6df1aba594ba62a512d7cc80988026a5c337509f45703528f"} Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.159587 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh" event={"ID":"3f161f69-0220-4b3f-9f46-76277cd105f9","Type":"ContainerDied","Data":"de2481dba38f7f12860db2cb61b2f4648cfae5de917e0579aecfa5ce4be8dacb"} Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.159661 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de2481dba38f7f12860db2cb61b2f4648cfae5de917e0579aecfa5ce4be8dacb" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.159832 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.187583 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.226443 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/253ae1cb-50f4-48e7-a004-a70a958c27cd-utilities\") pod \"redhat-marketplace-8zln5\" (UID: \"253ae1cb-50f4-48e7-a004-a70a958c27cd\") " pod="openshift-marketplace/redhat-marketplace-8zln5" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.226798 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2cf6\" (UniqueName: \"kubernetes.io/projected/253ae1cb-50f4-48e7-a004-a70a958c27cd-kube-api-access-m2cf6\") pod \"redhat-marketplace-8zln5\" (UID: \"253ae1cb-50f4-48e7-a004-a70a958c27cd\") " pod="openshift-marketplace/redhat-marketplace-8zln5" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.226857 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/253ae1cb-50f4-48e7-a004-a70a958c27cd-catalog-content\") pod \"redhat-marketplace-8zln5\" (UID: \"253ae1cb-50f4-48e7-a004-a70a958c27cd\") " pod="openshift-marketplace/redhat-marketplace-8zln5" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.227370 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/253ae1cb-50f4-48e7-a004-a70a958c27cd-catalog-content\") pod \"redhat-marketplace-8zln5\" (UID: \"253ae1cb-50f4-48e7-a004-a70a958c27cd\") " pod="openshift-marketplace/redhat-marketplace-8zln5" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.227785 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/253ae1cb-50f4-48e7-a004-a70a958c27cd-utilities\") pod \"redhat-marketplace-8zln5\" (UID: \"253ae1cb-50f4-48e7-a004-a70a958c27cd\") " pod="openshift-marketplace/redhat-marketplace-8zln5" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.243279 4947 patch_prober.go:28] interesting pod/downloads-7954f5f757-zcdgs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.243320 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zcdgs" podUID="e0e2a5e3-321b-4774-bf83-dd727fc954d2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.243336 4947 patch_prober.go:28] interesting pod/downloads-7954f5f757-zcdgs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.243382 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zcdgs" podUID="e0e2a5e3-321b-4774-bf83-dd727fc954d2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.249934 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2cf6\" (UniqueName: \"kubernetes.io/projected/253ae1cb-50f4-48e7-a004-a70a958c27cd-kube-api-access-m2cf6\") pod \"redhat-marketplace-8zln5\" 
(UID: \"253ae1cb-50f4-48e7-a004-a70a958c27cd\") " pod="openshift-marketplace/redhat-marketplace-8zln5" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.262736 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.263091 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.284278 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.330446 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fq4zz"] Nov 29 06:36:37 crc kubenswrapper[4947]: W1129 06:36:37.338753 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e620beb_b5db_4321_b404_0ef499ded600.slice/crio-f4b5cf2791740beecb54d837d3351f1f1be1163d1d7c1602b4267b4ecaa9d6c1 WatchSource:0}: Error finding container f4b5cf2791740beecb54d837d3351f1f1be1163d1d7c1602b4267b4ecaa9d6c1: Status 404 returned error can't find the container with id f4b5cf2791740beecb54d837d3351f1f1be1163d1d7c1602b4267b4ecaa9d6c1 Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.398136 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8zln5" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.573465 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-6ndbx" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.577636 4947 patch_prober.go:28] interesting pod/router-default-5444994796-6ndbx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 06:36:37 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Nov 29 06:36:37 crc kubenswrapper[4947]: [+]process-running ok Nov 29 06:36:37 crc kubenswrapper[4947]: healthz check failed Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.577706 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6ndbx" podUID="a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.785349 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zln5"] Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.820248 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qsz24"] Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.821541 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qsz24" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.823344 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.828667 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.828758 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.830406 4947 patch_prober.go:28] interesting pod/console-f9d7485db-nswtf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.830453 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-nswtf" podUID="711e27d0-dd37-4f6f-adae-5c04bb856f47" containerName="console" probeResult="failure" output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.837071 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpvng\" (UniqueName: \"kubernetes.io/projected/93c9ef47-b7f5-41f4-8e91-bee4f16b658d-kube-api-access-tpvng\") pod \"redhat-operators-qsz24\" (UID: \"93c9ef47-b7f5-41f4-8e91-bee4f16b658d\") " pod="openshift-marketplace/redhat-operators-qsz24" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.837190 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/93c9ef47-b7f5-41f4-8e91-bee4f16b658d-utilities\") pod \"redhat-operators-qsz24\" (UID: \"93c9ef47-b7f5-41f4-8e91-bee4f16b658d\") " pod="openshift-marketplace/redhat-operators-qsz24" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.837280 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c9ef47-b7f5-41f4-8e91-bee4f16b658d-catalog-content\") pod \"redhat-operators-qsz24\" (UID: \"93c9ef47-b7f5-41f4-8e91-bee4f16b658d\") " pod="openshift-marketplace/redhat-operators-qsz24" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.839886 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qsz24"] Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.874338 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.874719 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.887248 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cjcrf\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.898524 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.909373 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-b45sk" podStartSLOduration=128.909347274 podStartE2EDuration="2m8.909347274s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:37.87964119 +0000 UTC m=+148.924023271" watchObservedRunningTime="2025-11-29 06:36:37.909347274 +0000 UTC m=+148.953729365" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.940021 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c9ef47-b7f5-41f4-8e91-bee4f16b658d-utilities\") pod \"redhat-operators-qsz24\" (UID: \"93c9ef47-b7f5-41f4-8e91-bee4f16b658d\") " pod="openshift-marketplace/redhat-operators-qsz24" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.940104 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c9ef47-b7f5-41f4-8e91-bee4f16b658d-catalog-content\") pod \"redhat-operators-qsz24\" (UID: \"93c9ef47-b7f5-41f4-8e91-bee4f16b658d\") " pod="openshift-marketplace/redhat-operators-qsz24" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.940194 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpvng\" (UniqueName: \"kubernetes.io/projected/93c9ef47-b7f5-41f4-8e91-bee4f16b658d-kube-api-access-tpvng\") pod \"redhat-operators-qsz24\" (UID: \"93c9ef47-b7f5-41f4-8e91-bee4f16b658d\") " pod="openshift-marketplace/redhat-operators-qsz24" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.941824 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c9ef47-b7f5-41f4-8e91-bee4f16b658d-utilities\") pod \"redhat-operators-qsz24\" (UID: \"93c9ef47-b7f5-41f4-8e91-bee4f16b658d\") " 
pod="openshift-marketplace/redhat-operators-qsz24" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.942126 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c9ef47-b7f5-41f4-8e91-bee4f16b658d-catalog-content\") pod \"redhat-operators-qsz24\" (UID: \"93c9ef47-b7f5-41f4-8e91-bee4f16b658d\") " pod="openshift-marketplace/redhat-operators-qsz24" Nov 29 06:36:37 crc kubenswrapper[4947]: I1129 06:36:37.961685 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpvng\" (UniqueName: \"kubernetes.io/projected/93c9ef47-b7f5-41f4-8e91-bee4f16b658d-kube-api-access-tpvng\") pod \"redhat-operators-qsz24\" (UID: \"93c9ef47-b7f5-41f4-8e91-bee4f16b658d\") " pod="openshift-marketplace/redhat-operators-qsz24" Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.019912 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.144829 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qsz24" Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.160701 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.174557 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b9fd0ee162c9a83ac66f6284e23a8b4ce7661ee9df58d0eabe939dc49415e070"} Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.176963 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zln5" event={"ID":"253ae1cb-50f4-48e7-a004-a70a958c27cd","Type":"ContainerStarted","Data":"6d5bfc375b7fb87fab2d1d765997e5912baae8a4340fef9ac54176aae988450a"} Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.179571 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fq4zz" event={"ID":"1e620beb-b5db-4321-b404-0ef499ded600","Type":"ContainerStarted","Data":"f4b5cf2791740beecb54d837d3351f1f1be1163d1d7c1602b4267b4ecaa9d6c1"} Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.187304 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a5b307c462858d1d047a092bcea529c42f77bbe50748401ad8c2e47e39a08157"} Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.195358 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ts6sm" Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.195451 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-lx74d" Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.223375 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w24f6"] Nov 29 06:36:38 crc 
kubenswrapper[4947]: I1129 06:36:38.225378 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w24f6" Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.239191 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w24f6"] Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.244757 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd94a1d6-7039-4b84-aa44-ee8ec166da24-catalog-content\") pod \"redhat-operators-w24f6\" (UID: \"fd94a1d6-7039-4b84-aa44-ee8ec166da24\") " pod="openshift-marketplace/redhat-operators-w24f6" Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.244807 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb9l4\" (UniqueName: \"kubernetes.io/projected/fd94a1d6-7039-4b84-aa44-ee8ec166da24-kube-api-access-sb9l4\") pod \"redhat-operators-w24f6\" (UID: \"fd94a1d6-7039-4b84-aa44-ee8ec166da24\") " pod="openshift-marketplace/redhat-operators-w24f6" Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.245110 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd94a1d6-7039-4b84-aa44-ee8ec166da24-utilities\") pod \"redhat-operators-w24f6\" (UID: \"fd94a1d6-7039-4b84-aa44-ee8ec166da24\") " pod="openshift-marketplace/redhat-operators-w24f6" Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.346700 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd94a1d6-7039-4b84-aa44-ee8ec166da24-utilities\") pod \"redhat-operators-w24f6\" (UID: \"fd94a1d6-7039-4b84-aa44-ee8ec166da24\") " pod="openshift-marketplace/redhat-operators-w24f6" Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 
06:36:38.347056 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd94a1d6-7039-4b84-aa44-ee8ec166da24-catalog-content\") pod \"redhat-operators-w24f6\" (UID: \"fd94a1d6-7039-4b84-aa44-ee8ec166da24\") " pod="openshift-marketplace/redhat-operators-w24f6" Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.347083 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb9l4\" (UniqueName: \"kubernetes.io/projected/fd94a1d6-7039-4b84-aa44-ee8ec166da24-kube-api-access-sb9l4\") pod \"redhat-operators-w24f6\" (UID: \"fd94a1d6-7039-4b84-aa44-ee8ec166da24\") " pod="openshift-marketplace/redhat-operators-w24f6" Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.347918 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd94a1d6-7039-4b84-aa44-ee8ec166da24-utilities\") pod \"redhat-operators-w24f6\" (UID: \"fd94a1d6-7039-4b84-aa44-ee8ec166da24\") " pod="openshift-marketplace/redhat-operators-w24f6" Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.347989 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd94a1d6-7039-4b84-aa44-ee8ec166da24-catalog-content\") pod \"redhat-operators-w24f6\" (UID: \"fd94a1d6-7039-4b84-aa44-ee8ec166da24\") " pod="openshift-marketplace/redhat-operators-w24f6" Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.514316 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qsz24"] Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.580454 4947 patch_prober.go:28] interesting pod/router-default-5444994796-6ndbx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 06:36:38 
crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Nov 29 06:36:38 crc kubenswrapper[4947]: [+]process-running ok Nov 29 06:36:38 crc kubenswrapper[4947]: healthz check failed Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.580570 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6ndbx" podUID="a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.595805 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb9l4\" (UniqueName: \"kubernetes.io/projected/fd94a1d6-7039-4b84-aa44-ee8ec166da24-kube-api-access-sb9l4\") pod \"redhat-operators-w24f6\" (UID: \"fd94a1d6-7039-4b84-aa44-ee8ec166da24\") " pod="openshift-marketplace/redhat-operators-w24f6" Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.616519 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cjcrf"] Nov 29 06:36:38 crc kubenswrapper[4947]: W1129 06:36:38.624402 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod819051f4_236d_42d3_b3cf_c82103136dce.slice/crio-450be68be89a39c2a7e124829fb8cdeeb4dcb3c23bb75b99843614c39c019ec7 WatchSource:0}: Error finding container 450be68be89a39c2a7e124829fb8cdeeb4dcb3c23bb75b99843614c39c019ec7: Status 404 returned error can't find the container with id 450be68be89a39c2a7e124829fb8cdeeb4dcb3c23bb75b99843614c39c019ec7 Nov 29 06:36:38 crc kubenswrapper[4947]: I1129 06:36:38.852585 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w24f6" Nov 29 06:36:39 crc kubenswrapper[4947]: I1129 06:36:39.086748 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w24f6"] Nov 29 06:36:39 crc kubenswrapper[4947]: W1129 06:36:39.096167 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd94a1d6_7039_4b84_aa44_ee8ec166da24.slice/crio-6d07517addb7352585fbc41b52e6ae93d1a48b53856041ebd6b1a86f0f2c9e5a WatchSource:0}: Error finding container 6d07517addb7352585fbc41b52e6ae93d1a48b53856041ebd6b1a86f0f2c9e5a: Status 404 returned error can't find the container with id 6d07517addb7352585fbc41b52e6ae93d1a48b53856041ebd6b1a86f0f2c9e5a Nov 29 06:36:39 crc kubenswrapper[4947]: I1129 06:36:39.222177 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" event={"ID":"819051f4-236d-42d3-b3cf-c82103136dce","Type":"ContainerStarted","Data":"450be68be89a39c2a7e124829fb8cdeeb4dcb3c23bb75b99843614c39c019ec7"} Nov 29 06:36:39 crc kubenswrapper[4947]: I1129 06:36:39.224469 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w24f6" event={"ID":"fd94a1d6-7039-4b84-aa44-ee8ec166da24","Type":"ContainerStarted","Data":"6d07517addb7352585fbc41b52e6ae93d1a48b53856041ebd6b1a86f0f2c9e5a"} Nov 29 06:36:39 crc kubenswrapper[4947]: I1129 06:36:39.226180 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4vb4" event={"ID":"159f707e-f150-45c6-9371-6b4b272eaf5d","Type":"ContainerStarted","Data":"36ff54017bf5d6cb1a3fa22d28583e9c2b073eee805669cf1bfb9f564682371e"} Nov 29 06:36:39 crc kubenswrapper[4947]: I1129 06:36:39.227487 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsz24" 
event={"ID":"93c9ef47-b7f5-41f4-8e91-bee4f16b658d","Type":"ContainerStarted","Data":"00d4ec0c49505de171c295ed3db5700c4603af0de15aa4acfca9a970fea10ac2"} Nov 29 06:36:40 crc kubenswrapper[4947]: I1129 06:36:40.138561 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b45sk" Nov 29 06:36:40 crc kubenswrapper[4947]: I1129 06:36:40.147117 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b45sk" Nov 29 06:36:40 crc kubenswrapper[4947]: I1129 06:36:40.233479 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"35af846b-b152-4122-8ec9-3eec41937893","Type":"ContainerStarted","Data":"e066567b529440c9a23535c7c6766043b8896f907bcbdd1f6e04efb0f4f24f57"} Nov 29 06:36:40 crc kubenswrapper[4947]: I1129 06:36:40.234862 4947 generic.go:334] "Generic (PLEG): container finished" podID="159f707e-f150-45c6-9371-6b4b272eaf5d" containerID="36ff54017bf5d6cb1a3fa22d28583e9c2b073eee805669cf1bfb9f564682371e" exitCode=0 Nov 29 06:36:40 crc kubenswrapper[4947]: I1129 06:36:40.234970 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4vb4" event={"ID":"159f707e-f150-45c6-9371-6b4b272eaf5d","Type":"ContainerDied","Data":"36ff54017bf5d6cb1a3fa22d28583e9c2b073eee805669cf1bfb9f564682371e"} Nov 29 06:36:40 crc kubenswrapper[4947]: I1129 06:36:40.321673 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-6ndbx" Nov 29 06:36:40 crc kubenswrapper[4947]: I1129 06:36:40.325676 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-6ndbx" Nov 29 06:36:41 crc kubenswrapper[4947]: I1129 06:36:41.241804 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-fq4zz" event={"ID":"1e620beb-b5db-4321-b404-0ef499ded600","Type":"ContainerStarted","Data":"86f2002f25356ee3382834b6865c179bef0e838a309ca7342f577cefffae10bd"} Nov 29 06:36:41 crc kubenswrapper[4947]: I1129 06:36:41.243510 4947 generic.go:334] "Generic (PLEG): container finished" podID="658b72a9-13fc-4881-88c5-109b221bbc48" containerID="8305120d2c16842620cab8de3a5af9aabfd34c903721287ec8b672723e321425" exitCode=0 Nov 29 06:36:41 crc kubenswrapper[4947]: I1129 06:36:41.243601 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqz4n" event={"ID":"658b72a9-13fc-4881-88c5-109b221bbc48","Type":"ContainerDied","Data":"8305120d2c16842620cab8de3a5af9aabfd34c903721287ec8b672723e321425"} Nov 29 06:36:41 crc kubenswrapper[4947]: I1129 06:36:41.245183 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"fb2880b1e842a2a4069a0fb60013dc11b388c3ac4764db19fa514e089d055059"} Nov 29 06:36:41 crc kubenswrapper[4947]: I1129 06:36:41.246584 4947 generic.go:334] "Generic (PLEG): container finished" podID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" containerID="916c62a2f75dca89ddefabefc3aa70951292f893d1f25d243e3ea1333a2aaa3a" exitCode=0 Nov 29 06:36:41 crc kubenswrapper[4947]: I1129 06:36:41.246672 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phgdn" event={"ID":"b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8","Type":"ContainerDied","Data":"916c62a2f75dca89ddefabefc3aa70951292f893d1f25d243e3ea1333a2aaa3a"} Nov 29 06:36:41 crc kubenswrapper[4947]: I1129 06:36:41.248096 4947 generic.go:334] "Generic (PLEG): container finished" podID="1a37f770-07c2-40b1-9f24-ccddc3215658" containerID="31f5d71eb410e2949ad9f4965932711bd2c39c456909db5a679aa8c8f5bbb0b6" exitCode=0 Nov 29 06:36:41 crc 
kubenswrapper[4947]: I1129 06:36:41.248181 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbhvj" event={"ID":"1a37f770-07c2-40b1-9f24-ccddc3215658","Type":"ContainerDied","Data":"31f5d71eb410e2949ad9f4965932711bd2c39c456909db5a679aa8c8f5bbb0b6"} Nov 29 06:36:41 crc kubenswrapper[4947]: I1129 06:36:41.249389 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"bee9ed080aad75cd5157004a297ba3fac93ce4b3ebe0089d95668f5cc161520d"} Nov 29 06:36:41 crc kubenswrapper[4947]: I1129 06:36:41.250462 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2939b65bcb80f56b705a266e116ccca87ce827f8e45f0392dcb15349a7b9be70"} Nov 29 06:36:41 crc kubenswrapper[4947]: I1129 06:36:41.251693 4947 generic.go:334] "Generic (PLEG): container finished" podID="253ae1cb-50f4-48e7-a004-a70a958c27cd" containerID="c68eed3d9db1869817626a6f965483b1ed0d01a01d450a271ab6c273ae51aeb3" exitCode=0 Nov 29 06:36:41 crc kubenswrapper[4947]: I1129 06:36:41.252508 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zln5" event={"ID":"253ae1cb-50f4-48e7-a004-a70a958c27cd","Type":"ContainerDied","Data":"c68eed3d9db1869817626a6f965483b1ed0d01a01d450a271ab6c273ae51aeb3"} Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.136422 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.137463 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.141294 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.141308 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.149006 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.258416 4947 generic.go:334] "Generic (PLEG): container finished" podID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" containerID="33967f8a2bfa6fd85b97d82e57ac6127b4f2d826788d375fc2ebb91d9772287f" exitCode=0 Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.258768 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w24f6" event={"ID":"fd94a1d6-7039-4b84-aa44-ee8ec166da24","Type":"ContainerDied","Data":"33967f8a2bfa6fd85b97d82e57ac6127b4f2d826788d375fc2ebb91d9772287f"} Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.259877 4947 generic.go:334] "Generic (PLEG): container finished" podID="1e620beb-b5db-4321-b404-0ef499ded600" containerID="86f2002f25356ee3382834b6865c179bef0e838a309ca7342f577cefffae10bd" exitCode=0 Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.259944 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fq4zz" event={"ID":"1e620beb-b5db-4321-b404-0ef499ded600","Type":"ContainerDied","Data":"86f2002f25356ee3382834b6865c179bef0e838a309ca7342f577cefffae10bd"} Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.261424 4947 generic.go:334] "Generic (PLEG): container finished" podID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" 
containerID="d62a5d2f3d75367628e637726ba319a36808d77db75ae6be396a2295a5b501e7" exitCode=0 Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.261500 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsz24" event={"ID":"93c9ef47-b7f5-41f4-8e91-bee4f16b658d","Type":"ContainerDied","Data":"d62a5d2f3d75367628e637726ba319a36808d77db75ae6be396a2295a5b501e7"} Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.263011 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" event={"ID":"819051f4-236d-42d3-b3cf-c82103136dce","Type":"ContainerStarted","Data":"9fa2f1f31d39edaf36810c348cb6d2c8f08a391489acff1804cdc85a7b24264e"} Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.265708 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" event={"ID":"7fc6440e-f991-4421-b078-9496ffdfb74d","Type":"ContainerStarted","Data":"e34759f99f484f4dd81009fc6e476a5c665efb1c7849135560136036310879ad"} Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.268017 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.285202 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76b0a502-11c9-4674-89b7-c5cc5ba44edc-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"76b0a502-11c9-4674-89b7-c5cc5ba44edc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.285363 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76b0a502-11c9-4674-89b7-c5cc5ba44edc-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"76b0a502-11c9-4674-89b7-c5cc5ba44edc\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.305992 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=8.305975901 podStartE2EDuration="8.305975901s" podCreationTimestamp="2025-11-29 06:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:42.283971206 +0000 UTC m=+153.328353277" watchObservedRunningTime="2025-11-29 06:36:42.305975901 +0000 UTC m=+153.350357982" Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.386537 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76b0a502-11c9-4674-89b7-c5cc5ba44edc-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"76b0a502-11c9-4674-89b7-c5cc5ba44edc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.386605 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76b0a502-11c9-4674-89b7-c5cc5ba44edc-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"76b0a502-11c9-4674-89b7-c5cc5ba44edc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.386666 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76b0a502-11c9-4674-89b7-c5cc5ba44edc-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"76b0a502-11c9-4674-89b7-c5cc5ba44edc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.404265 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/76b0a502-11c9-4674-89b7-c5cc5ba44edc-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"76b0a502-11c9-4674-89b7-c5cc5ba44edc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.455838 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 06:36:42 crc kubenswrapper[4947]: I1129 06:36:42.634076 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 29 06:36:42 crc kubenswrapper[4947]: W1129 06:36:42.640736 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod76b0a502_11c9_4674_89b7_c5cc5ba44edc.slice/crio-218bf8e9c44fb97821ab1fe599981c293b4151460538915304489f7577b79306 WatchSource:0}: Error finding container 218bf8e9c44fb97821ab1fe599981c293b4151460538915304489f7577b79306: Status 404 returned error can't find the container with id 218bf8e9c44fb97821ab1fe599981c293b4151460538915304489f7577b79306 Nov 29 06:36:43 crc kubenswrapper[4947]: I1129 06:36:43.783052 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"76b0a502-11c9-4674-89b7-c5cc5ba44edc","Type":"ContainerStarted","Data":"218bf8e9c44fb97821ab1fe599981c293b4151460538915304489f7577b79306"} Nov 29 06:36:43 crc kubenswrapper[4947]: I1129 06:36:43.784587 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-td4m7" Nov 29 06:36:44 crc kubenswrapper[4947]: I1129 06:36:44.788237 4947 generic.go:334] "Generic (PLEG): container finished" podID="35af846b-b152-4122-8ec9-3eec41937893" containerID="e066567b529440c9a23535c7c6766043b8896f907bcbdd1f6e04efb0f4f24f57" exitCode=0 Nov 29 06:36:44 crc kubenswrapper[4947]: I1129 06:36:44.788323 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"35af846b-b152-4122-8ec9-3eec41937893","Type":"ContainerDied","Data":"e066567b529440c9a23535c7c6766043b8896f907bcbdd1f6e04efb0f4f24f57"} Nov 29 06:36:45 crc kubenswrapper[4947]: I1129 06:36:45.871928 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" podStartSLOduration=136.871906693 podStartE2EDuration="2m16.871906693s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:45.869730478 +0000 UTC m=+156.914112559" watchObservedRunningTime="2025-11-29 06:36:45.871906693 +0000 UTC m=+156.916288774" Nov 29 06:36:46 crc kubenswrapper[4947]: I1129 06:36:46.096487 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 06:36:46 crc kubenswrapper[4947]: I1129 06:36:46.235859 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35af846b-b152-4122-8ec9-3eec41937893-kube-api-access\") pod \"35af846b-b152-4122-8ec9-3eec41937893\" (UID: \"35af846b-b152-4122-8ec9-3eec41937893\") " Nov 29 06:36:46 crc kubenswrapper[4947]: I1129 06:36:46.235926 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35af846b-b152-4122-8ec9-3eec41937893-kubelet-dir\") pod \"35af846b-b152-4122-8ec9-3eec41937893\" (UID: \"35af846b-b152-4122-8ec9-3eec41937893\") " Nov 29 06:36:46 crc kubenswrapper[4947]: I1129 06:36:46.236174 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35af846b-b152-4122-8ec9-3eec41937893-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"35af846b-b152-4122-8ec9-3eec41937893" (UID: "35af846b-b152-4122-8ec9-3eec41937893"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:36:46 crc kubenswrapper[4947]: I1129 06:36:46.236383 4947 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35af846b-b152-4122-8ec9-3eec41937893-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 29 06:36:46 crc kubenswrapper[4947]: I1129 06:36:46.244452 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35af846b-b152-4122-8ec9-3eec41937893-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "35af846b-b152-4122-8ec9-3eec41937893" (UID: "35af846b-b152-4122-8ec9-3eec41937893"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:36:46 crc kubenswrapper[4947]: I1129 06:36:46.338466 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35af846b-b152-4122-8ec9-3eec41937893-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 06:36:46 crc kubenswrapper[4947]: I1129 06:36:46.698480 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:36:46 crc kubenswrapper[4947]: I1129 06:36:46.801685 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"35af846b-b152-4122-8ec9-3eec41937893","Type":"ContainerDied","Data":"45254bcfb48c569ec373f865ac34245ecc6790434c627994d25d2744a310818c"} Nov 29 06:36:46 crc kubenswrapper[4947]: I1129 06:36:46.802100 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45254bcfb48c569ec373f865ac34245ecc6790434c627994d25d2744a310818c" Nov 29 06:36:46 crc kubenswrapper[4947]: I1129 06:36:46.801763 4947 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 06:36:47 crc kubenswrapper[4947]: I1129 06:36:47.250123 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-zcdgs" Nov 29 06:36:47 crc kubenswrapper[4947]: I1129 06:36:47.809663 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"76b0a502-11c9-4674-89b7-c5cc5ba44edc","Type":"ContainerStarted","Data":"ec59bef870337dffcead3e89780190b0ab3c5f9e8cb0332c7d8a903d469d11ea"} Nov 29 06:36:48 crc kubenswrapper[4947]: I1129 06:36:48.161279 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:36:48 crc kubenswrapper[4947]: I1129 06:36:48.203771 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:48 crc kubenswrapper[4947]: I1129 06:36:48.215649 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:36:49 crc kubenswrapper[4947]: I1129 06:36:49.821417 4947 generic.go:334] "Generic (PLEG): container finished" podID="76b0a502-11c9-4674-89b7-c5cc5ba44edc" containerID="ec59bef870337dffcead3e89780190b0ab3c5f9e8cb0332c7d8a903d469d11ea" exitCode=0 Nov 29 06:36:49 crc kubenswrapper[4947]: I1129 06:36:49.821478 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"76b0a502-11c9-4674-89b7-c5cc5ba44edc","Type":"ContainerDied","Data":"ec59bef870337dffcead3e89780190b0ab3c5f9e8cb0332c7d8a903d469d11ea"} Nov 29 06:36:49 crc kubenswrapper[4947]: I1129 06:36:49.825164 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" 
event={"ID":"7fc6440e-f991-4421-b078-9496ffdfb74d","Type":"ContainerStarted","Data":"43f7b56a3f6fc0402c4446eb8318e80f14eb95e0987b76914722d1d4ecd88ef9"} Nov 29 06:36:49 crc kubenswrapper[4947]: I1129 06:36:49.853410 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-p7wlh" podStartSLOduration=25.853387404 podStartE2EDuration="25.853387404s" podCreationTimestamp="2025-11-29 06:36:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:36:49.851171868 +0000 UTC m=+160.895553949" watchObservedRunningTime="2025-11-29 06:36:49.853387404 +0000 UTC m=+160.897769485" Nov 29 06:36:51 crc kubenswrapper[4947]: I1129 06:36:51.160386 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 06:36:51 crc kubenswrapper[4947]: I1129 06:36:51.345691 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76b0a502-11c9-4674-89b7-c5cc5ba44edc-kubelet-dir\") pod \"76b0a502-11c9-4674-89b7-c5cc5ba44edc\" (UID: \"76b0a502-11c9-4674-89b7-c5cc5ba44edc\") " Nov 29 06:36:51 crc kubenswrapper[4947]: I1129 06:36:51.345806 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76b0a502-11c9-4674-89b7-c5cc5ba44edc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "76b0a502-11c9-4674-89b7-c5cc5ba44edc" (UID: "76b0a502-11c9-4674-89b7-c5cc5ba44edc"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:36:51 crc kubenswrapper[4947]: I1129 06:36:51.345813 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76b0a502-11c9-4674-89b7-c5cc5ba44edc-kube-api-access\") pod \"76b0a502-11c9-4674-89b7-c5cc5ba44edc\" (UID: \"76b0a502-11c9-4674-89b7-c5cc5ba44edc\") " Nov 29 06:36:51 crc kubenswrapper[4947]: I1129 06:36:51.346101 4947 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76b0a502-11c9-4674-89b7-c5cc5ba44edc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 29 06:36:51 crc kubenswrapper[4947]: I1129 06:36:51.351525 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76b0a502-11c9-4674-89b7-c5cc5ba44edc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "76b0a502-11c9-4674-89b7-c5cc5ba44edc" (UID: "76b0a502-11c9-4674-89b7-c5cc5ba44edc"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:36:51 crc kubenswrapper[4947]: I1129 06:36:51.447553 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76b0a502-11c9-4674-89b7-c5cc5ba44edc-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 06:36:51 crc kubenswrapper[4947]: I1129 06:36:51.851693 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"76b0a502-11c9-4674-89b7-c5cc5ba44edc","Type":"ContainerDied","Data":"218bf8e9c44fb97821ab1fe599981c293b4151460538915304489f7577b79306"} Nov 29 06:36:51 crc kubenswrapper[4947]: I1129 06:36:51.851760 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="218bf8e9c44fb97821ab1fe599981c293b4151460538915304489f7577b79306" Nov 29 06:36:51 crc kubenswrapper[4947]: I1129 06:36:51.851739 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 06:36:52 crc kubenswrapper[4947]: I1129 06:36:52.162071 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs\") pod \"network-metrics-daemon-2fbj5\" (UID: \"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\") " pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:36:52 crc kubenswrapper[4947]: I1129 06:36:52.170287 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53a3bcac-8ad0-47ce-abee-ee56fd152ea8-metrics-certs\") pod \"network-metrics-daemon-2fbj5\" (UID: \"53a3bcac-8ad0-47ce-abee-ee56fd152ea8\") " pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:36:52 crc kubenswrapper[4947]: I1129 06:36:52.309156 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2fbj5" Nov 29 06:36:52 crc kubenswrapper[4947]: I1129 06:36:52.712517 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2fbj5"] Nov 29 06:36:52 crc kubenswrapper[4947]: I1129 06:36:52.869348 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" event={"ID":"53a3bcac-8ad0-47ce-abee-ee56fd152ea8","Type":"ContainerStarted","Data":"99c40142bced70866ddd4df9c036386f3d17bb2469627628a87428a3d8e223d0"} Nov 29 06:36:52 crc kubenswrapper[4947]: I1129 06:36:52.988017 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 06:36:52 crc kubenswrapper[4947]: I1129 06:36:52.988092 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 06:36:54 crc kubenswrapper[4947]: I1129 06:36:54.925470 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" event={"ID":"53a3bcac-8ad0-47ce-abee-ee56fd152ea8","Type":"ContainerStarted","Data":"c8d71a36c28892f738c56d564dee62eb17af63b9d5e7779f9215ec145491d2a4"} Nov 29 06:36:58 crc kubenswrapper[4947]: I1129 06:36:58.165294 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:37:07 crc kubenswrapper[4947]: I1129 06:37:07.953512 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzd69" Nov 29 06:37:17 crc kubenswrapper[4947]: I1129 06:37:17.888949 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 06:37:18 crc kubenswrapper[4947]: I1129 06:37:18.133722 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 29 06:37:18 crc kubenswrapper[4947]: E1129 06:37:18.133989 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76b0a502-11c9-4674-89b7-c5cc5ba44edc" containerName="pruner" Nov 29 06:37:18 crc kubenswrapper[4947]: I1129 06:37:18.134005 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="76b0a502-11c9-4674-89b7-c5cc5ba44edc" containerName="pruner" Nov 29 06:37:18 crc kubenswrapper[4947]: E1129 06:37:18.134033 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35af846b-b152-4122-8ec9-3eec41937893" containerName="pruner" Nov 29 06:37:18 crc kubenswrapper[4947]: I1129 06:37:18.134041 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="35af846b-b152-4122-8ec9-3eec41937893" containerName="pruner" Nov 29 06:37:18 crc kubenswrapper[4947]: I1129 06:37:18.134158 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="76b0a502-11c9-4674-89b7-c5cc5ba44edc" containerName="pruner" Nov 29 06:37:18 crc kubenswrapper[4947]: I1129 06:37:18.134178 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="35af846b-b152-4122-8ec9-3eec41937893" containerName="pruner" Nov 29 06:37:18 crc kubenswrapper[4947]: I1129 06:37:18.134681 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 06:37:18 crc kubenswrapper[4947]: I1129 06:37:18.136622 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 29 06:37:18 crc kubenswrapper[4947]: I1129 06:37:18.138123 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 29 06:37:18 crc kubenswrapper[4947]: I1129 06:37:18.145837 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 29 06:37:18 crc kubenswrapper[4947]: I1129 06:37:18.160747 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e598e1a-e111-4110-99ea-22bc72820f0c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e598e1a-e111-4110-99ea-22bc72820f0c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 06:37:18 crc kubenswrapper[4947]: I1129 06:37:18.160843 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e598e1a-e111-4110-99ea-22bc72820f0c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e598e1a-e111-4110-99ea-22bc72820f0c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 06:37:18 crc kubenswrapper[4947]: I1129 06:37:18.261800 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e598e1a-e111-4110-99ea-22bc72820f0c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e598e1a-e111-4110-99ea-22bc72820f0c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 06:37:18 crc kubenswrapper[4947]: I1129 06:37:18.261866 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/8e598e1a-e111-4110-99ea-22bc72820f0c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e598e1a-e111-4110-99ea-22bc72820f0c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 06:37:18 crc kubenswrapper[4947]: I1129 06:37:18.261944 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e598e1a-e111-4110-99ea-22bc72820f0c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e598e1a-e111-4110-99ea-22bc72820f0c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 06:37:18 crc kubenswrapper[4947]: I1129 06:37:18.285894 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e598e1a-e111-4110-99ea-22bc72820f0c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e598e1a-e111-4110-99ea-22bc72820f0c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 06:37:18 crc kubenswrapper[4947]: I1129 06:37:18.472204 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 06:37:22 crc kubenswrapper[4947]: I1129 06:37:22.988688 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 06:37:22 crc kubenswrapper[4947]: I1129 06:37:22.989603 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 06:37:23 crc kubenswrapper[4947]: I1129 06:37:23.338742 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 29 06:37:23 crc kubenswrapper[4947]: I1129 06:37:23.339416 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 29 06:37:23 crc kubenswrapper[4947]: I1129 06:37:23.347547 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 29 06:37:23 crc kubenswrapper[4947]: I1129 06:37:23.531562 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05e8a117-69fe-4488-b4e4-c0d7f1b4a63a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"05e8a117-69fe-4488-b4e4-c0d7f1b4a63a\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 06:37:23 crc kubenswrapper[4947]: I1129 06:37:23.532085 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05e8a117-69fe-4488-b4e4-c0d7f1b4a63a-var-lock\") pod \"installer-9-crc\" (UID: \"05e8a117-69fe-4488-b4e4-c0d7f1b4a63a\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 06:37:23 crc kubenswrapper[4947]: I1129 06:37:23.532115 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05e8a117-69fe-4488-b4e4-c0d7f1b4a63a-kube-api-access\") pod \"installer-9-crc\" (UID: \"05e8a117-69fe-4488-b4e4-c0d7f1b4a63a\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 06:37:23 crc kubenswrapper[4947]: I1129 06:37:23.633201 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05e8a117-69fe-4488-b4e4-c0d7f1b4a63a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"05e8a117-69fe-4488-b4e4-c0d7f1b4a63a\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 06:37:23 crc kubenswrapper[4947]: I1129 06:37:23.633292 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/05e8a117-69fe-4488-b4e4-c0d7f1b4a63a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"05e8a117-69fe-4488-b4e4-c0d7f1b4a63a\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 06:37:23 crc kubenswrapper[4947]: I1129 06:37:23.633354 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05e8a117-69fe-4488-b4e4-c0d7f1b4a63a-var-lock\") pod \"installer-9-crc\" (UID: \"05e8a117-69fe-4488-b4e4-c0d7f1b4a63a\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 06:37:23 crc kubenswrapper[4947]: I1129 06:37:23.633402 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05e8a117-69fe-4488-b4e4-c0d7f1b4a63a-kube-api-access\") pod \"installer-9-crc\" (UID: \"05e8a117-69fe-4488-b4e4-c0d7f1b4a63a\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 06:37:23 crc kubenswrapper[4947]: I1129 06:37:23.633481 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05e8a117-69fe-4488-b4e4-c0d7f1b4a63a-var-lock\") pod \"installer-9-crc\" (UID: \"05e8a117-69fe-4488-b4e4-c0d7f1b4a63a\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 06:37:23 crc kubenswrapper[4947]: I1129 06:37:23.661876 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05e8a117-69fe-4488-b4e4-c0d7f1b4a63a-kube-api-access\") pod \"installer-9-crc\" (UID: \"05e8a117-69fe-4488-b4e4-c0d7f1b4a63a\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 06:37:23 crc kubenswrapper[4947]: I1129 06:37:23.682280 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 29 06:37:44 crc kubenswrapper[4947]: E1129 06:37:44.338592 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:b45b4080e75db66dbb2f4d8403f29133c1829a6e7a5055752f4267aea3a23894: Get \"https://registry.redhat.io/v2/redhat/community-operator-index/blobs/sha256:b45b4080e75db66dbb2f4d8403f29133c1829a6e7a5055752f4267aea3a23894\": context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 29 06:37:44 crc kubenswrapper[4947]: E1129 06:37:44.339509 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wt875,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,Secc
ompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-wqz4n_openshift-marketplace(658b72a9-13fc-4881-88c5-109b221bbc48): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:b45b4080e75db66dbb2f4d8403f29133c1829a6e7a5055752f4267aea3a23894: Get \"https://registry.redhat.io/v2/redhat/community-operator-index/blobs/sha256:b45b4080e75db66dbb2f4d8403f29133c1829a6e7a5055752f4267aea3a23894\": context canceled" logger="UnhandledError" Nov 29 06:37:44 crc kubenswrapper[4947]: E1129 06:37:44.340956 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:b45b4080e75db66dbb2f4d8403f29133c1829a6e7a5055752f4267aea3a23894: Get \\\"https://registry.redhat.io/v2/redhat/community-operator-index/blobs/sha256:b45b4080e75db66dbb2f4d8403f29133c1829a6e7a5055752f4267aea3a23894\\\": context canceled\"" pod="openshift-marketplace/community-operators-wqz4n" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" Nov 29 06:37:44 crc kubenswrapper[4947]: E1129 06:37:44.341130 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 29 06:37:44 crc kubenswrapper[4947]: E1129 06:37:44.341338 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fd5ln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-t4vb4_openshift-marketplace(159f707e-f150-45c6-9371-6b4b272eaf5d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 06:37:44 crc kubenswrapper[4947]: E1129 06:37:44.342477 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-t4vb4" podUID="159f707e-f150-45c6-9371-6b4b272eaf5d" 
Nov 29 06:37:45 crc kubenswrapper[4947]: E1129 06:37:45.633123 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 29 06:37:45 crc kubenswrapper[4947]: E1129 06:37:45.634033 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5pfz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-phgdn_openshift-marketplace(b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 06:37:45 crc kubenswrapper[4947]: E1129 06:37:45.636582 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-phgdn" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" Nov 29 06:37:48 crc kubenswrapper[4947]: E1129 06:37:48.506435 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 29 06:37:48 crc kubenswrapper[4947]: E1129 06:37:48.507597 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sb9l4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-w24f6_openshift-marketplace(fd94a1d6-7039-4b84-aa44-ee8ec166da24): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 06:37:48 crc kubenswrapper[4947]: E1129 06:37:48.509472 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-w24f6" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" Nov 29 06:37:49 crc 
kubenswrapper[4947]: E1129 06:37:49.824105 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 29 06:37:49 crc kubenswrapper[4947]: E1129 06:37:49.824905 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lztlt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-gbhvj_openshift-marketplace(1a37f770-07c2-40b1-9f24-ccddc3215658): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 06:37:49 crc kubenswrapper[4947]: E1129 06:37:49.826174 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gbhvj" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" Nov 29 06:37:49 crc kubenswrapper[4947]: E1129 06:37:49.875654 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 29 06:37:49 crc kubenswrapper[4947]: E1129 06:37:49.875830 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4xpnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fq4zz_openshift-marketplace(1e620beb-b5db-4321-b404-0ef499ded600): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 06:37:49 crc kubenswrapper[4947]: E1129 06:37:49.877360 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-fq4zz" podUID="1e620beb-b5db-4321-b404-0ef499ded600" Nov 29 06:37:49 crc 
kubenswrapper[4947]: E1129 06:37:49.903070 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 29 06:37:49 crc kubenswrapper[4947]: E1129 06:37:49.903255 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tpvng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-qsz24_openshift-marketplace(93c9ef47-b7f5-41f4-8e91-bee4f16b658d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 06:37:49 crc kubenswrapper[4947]: E1129 06:37:49.904413 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-qsz24" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" Nov 29 06:37:50 crc kubenswrapper[4947]: I1129 06:37:50.277305 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 29 06:37:50 crc kubenswrapper[4947]: W1129 06:37:50.295960 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8e598e1a_e111_4110_99ea_22bc72820f0c.slice/crio-1c63fc1053be4b7c79822d26c8e109a5dd91e611ec6dfd74fd716c85a2a064cb WatchSource:0}: Error finding container 1c63fc1053be4b7c79822d26c8e109a5dd91e611ec6dfd74fd716c85a2a064cb: Status 404 returned error can't find the container with id 1c63fc1053be4b7c79822d26c8e109a5dd91e611ec6dfd74fd716c85a2a064cb Nov 29 06:37:50 crc kubenswrapper[4947]: I1129 06:37:50.329779 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 29 06:37:50 crc kubenswrapper[4947]: I1129 06:37:50.678568 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"05e8a117-69fe-4488-b4e4-c0d7f1b4a63a","Type":"ContainerStarted","Data":"97cad3ec590558426124306fc3517db71333704981521242c74c5ee0cc3ffb4a"} Nov 29 06:37:50 crc kubenswrapper[4947]: I1129 06:37:50.680299 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"8e598e1a-e111-4110-99ea-22bc72820f0c","Type":"ContainerStarted","Data":"1c63fc1053be4b7c79822d26c8e109a5dd91e611ec6dfd74fd716c85a2a064cb"} Nov 29 06:37:50 crc kubenswrapper[4947]: E1129 06:37:50.684327 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gbhvj" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" Nov 29 06:37:50 crc kubenswrapper[4947]: E1129 06:37:50.684888 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-qsz24" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" Nov 29 06:37:51 crc kubenswrapper[4947]: E1129 06:37:51.525397 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 29 06:37:51 crc kubenswrapper[4947]: E1129 06:37:51.525661 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2cf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8zln5_openshift-marketplace(253ae1cb-50f4-48e7-a004-a70a958c27cd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 06:37:51 crc kubenswrapper[4947]: E1129 06:37:51.526771 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-8zln5" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" Nov 29 06:37:51 crc 
kubenswrapper[4947]: I1129 06:37:51.689187 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2fbj5" event={"ID":"53a3bcac-8ad0-47ce-abee-ee56fd152ea8","Type":"ContainerStarted","Data":"f4ba4ef860eb9de9b350e059ea3181e1d0c05fe21d54b6babe5f5104517fdbbb"} Nov 29 06:37:51 crc kubenswrapper[4947]: I1129 06:37:51.692660 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"05e8a117-69fe-4488-b4e4-c0d7f1b4a63a","Type":"ContainerStarted","Data":"a19bcd8246942d520bcd5ba7d447594f7f48aa7a1d65ff78216d547f3f04375c"} Nov 29 06:37:51 crc kubenswrapper[4947]: I1129 06:37:51.695495 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8e598e1a-e111-4110-99ea-22bc72820f0c","Type":"ContainerStarted","Data":"f4f94a89c4d978c054cfaaa7206efac8320d920045dfef7965a1f60b99a3a2ec"} Nov 29 06:37:51 crc kubenswrapper[4947]: I1129 06:37:51.715421 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-2fbj5" podStartSLOduration=202.715392538 podStartE2EDuration="3m22.715392538s" podCreationTimestamp="2025-11-29 06:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:37:51.708725266 +0000 UTC m=+222.753107347" watchObservedRunningTime="2025-11-29 06:37:51.715392538 +0000 UTC m=+222.759774659" Nov 29 06:37:51 crc kubenswrapper[4947]: I1129 06:37:51.734536 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=33.734508721 podStartE2EDuration="33.734508721s" podCreationTimestamp="2025-11-29 06:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:37:51.731327314 +0000 UTC 
m=+222.775709445" watchObservedRunningTime="2025-11-29 06:37:51.734508721 +0000 UTC m=+222.778890812" Nov 29 06:37:51 crc kubenswrapper[4947]: I1129 06:37:51.755687 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=28.755671291 podStartE2EDuration="28.755671291s" podCreationTimestamp="2025-11-29 06:37:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:37:51.755328511 +0000 UTC m=+222.799710662" watchObservedRunningTime="2025-11-29 06:37:51.755671291 +0000 UTC m=+222.800053372" Nov 29 06:37:52 crc kubenswrapper[4947]: I1129 06:37:52.706485 4947 generic.go:334] "Generic (PLEG): container finished" podID="8e598e1a-e111-4110-99ea-22bc72820f0c" containerID="f4f94a89c4d978c054cfaaa7206efac8320d920045dfef7965a1f60b99a3a2ec" exitCode=0 Nov 29 06:37:52 crc kubenswrapper[4947]: I1129 06:37:52.706601 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8e598e1a-e111-4110-99ea-22bc72820f0c","Type":"ContainerDied","Data":"f4f94a89c4d978c054cfaaa7206efac8320d920045dfef7965a1f60b99a3a2ec"} Nov 29 06:37:52 crc kubenswrapper[4947]: I1129 06:37:52.987605 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 06:37:52 crc kubenswrapper[4947]: I1129 06:37:52.987673 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Nov 29 06:37:52 crc kubenswrapper[4947]: I1129 06:37:52.987723 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 06:37:52 crc kubenswrapper[4947]: I1129 06:37:52.988370 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8"} pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 06:37:52 crc kubenswrapper[4947]: I1129 06:37:52.988486 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" containerID="cri-o://bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8" gracePeriod=600 Nov 29 06:37:53 crc kubenswrapper[4947]: I1129 06:37:53.712541 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerID="bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8" exitCode=0 Nov 29 06:37:53 crc kubenswrapper[4947]: I1129 06:37:53.712694 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerDied","Data":"bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8"} Nov 29 06:37:54 crc kubenswrapper[4947]: I1129 06:37:54.027341 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 06:37:54 crc kubenswrapper[4947]: I1129 06:37:54.102631 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e598e1a-e111-4110-99ea-22bc72820f0c-kubelet-dir\") pod \"8e598e1a-e111-4110-99ea-22bc72820f0c\" (UID: \"8e598e1a-e111-4110-99ea-22bc72820f0c\") " Nov 29 06:37:54 crc kubenswrapper[4947]: I1129 06:37:54.102716 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e598e1a-e111-4110-99ea-22bc72820f0c-kube-api-access\") pod \"8e598e1a-e111-4110-99ea-22bc72820f0c\" (UID: \"8e598e1a-e111-4110-99ea-22bc72820f0c\") " Nov 29 06:37:54 crc kubenswrapper[4947]: I1129 06:37:54.102833 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e598e1a-e111-4110-99ea-22bc72820f0c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8e598e1a-e111-4110-99ea-22bc72820f0c" (UID: "8e598e1a-e111-4110-99ea-22bc72820f0c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:37:54 crc kubenswrapper[4947]: I1129 06:37:54.103039 4947 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e598e1a-e111-4110-99ea-22bc72820f0c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 29 06:37:54 crc kubenswrapper[4947]: I1129 06:37:54.111354 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e598e1a-e111-4110-99ea-22bc72820f0c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8e598e1a-e111-4110-99ea-22bc72820f0c" (UID: "8e598e1a-e111-4110-99ea-22bc72820f0c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:37:54 crc kubenswrapper[4947]: I1129 06:37:54.203872 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e598e1a-e111-4110-99ea-22bc72820f0c-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 06:37:54 crc kubenswrapper[4947]: I1129 06:37:54.720878 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"6742510082cd58dfd52c8f7fa3778bd9aaaffe372801b3a708a086461b8d5abd"} Nov 29 06:37:54 crc kubenswrapper[4947]: I1129 06:37:54.722751 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8e598e1a-e111-4110-99ea-22bc72820f0c","Type":"ContainerDied","Data":"1c63fc1053be4b7c79822d26c8e109a5dd91e611ec6dfd74fd716c85a2a064cb"} Nov 29 06:37:54 crc kubenswrapper[4947]: I1129 06:37:54.722785 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 06:37:54 crc kubenswrapper[4947]: I1129 06:37:54.722809 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c63fc1053be4b7c79822d26c8e109a5dd91e611ec6dfd74fd716c85a2a064cb" Nov 29 06:38:02 crc kubenswrapper[4947]: I1129 06:38:02.790733 4947 generic.go:334] "Generic (PLEG): container finished" podID="159f707e-f150-45c6-9371-6b4b272eaf5d" containerID="283afd4d499611cd02ae6d506bf253cfd055b10a46905adc340ec986a71861e2" exitCode=0 Nov 29 06:38:02 crc kubenswrapper[4947]: I1129 06:38:02.790811 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4vb4" event={"ID":"159f707e-f150-45c6-9371-6b4b272eaf5d","Type":"ContainerDied","Data":"283afd4d499611cd02ae6d506bf253cfd055b10a46905adc340ec986a71861e2"} Nov 29 06:38:02 crc kubenswrapper[4947]: I1129 06:38:02.794064 4947 generic.go:334] "Generic (PLEG): container finished" podID="658b72a9-13fc-4881-88c5-109b221bbc48" containerID="976cf47c7a81ded79cab697b184562dfadbf98223f35c31af39b4d34941240e1" exitCode=0 Nov 29 06:38:02 crc kubenswrapper[4947]: I1129 06:38:02.794101 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqz4n" event={"ID":"658b72a9-13fc-4881-88c5-109b221bbc48","Type":"ContainerDied","Data":"976cf47c7a81ded79cab697b184562dfadbf98223f35c31af39b4d34941240e1"} Nov 29 06:38:03 crc kubenswrapper[4947]: I1129 06:38:03.804111 4947 generic.go:334] "Generic (PLEG): container finished" podID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" containerID="bae87c40c5d616ee71eac6558dc4f929c824e01eb3eb68699062b24f52ebde62" exitCode=0 Nov 29 06:38:03 crc kubenswrapper[4947]: I1129 06:38:03.804156 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phgdn" 
event={"ID":"b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8","Type":"ContainerDied","Data":"bae87c40c5d616ee71eac6558dc4f929c824e01eb3eb68699062b24f52ebde62"} Nov 29 06:38:04 crc kubenswrapper[4947]: I1129 06:38:04.813397 4947 generic.go:334] "Generic (PLEG): container finished" podID="1a37f770-07c2-40b1-9f24-ccddc3215658" containerID="f23b2903b37d6385814ddc74168550bc6d78b9de063a891462687bc2bc1a7d35" exitCode=0 Nov 29 06:38:04 crc kubenswrapper[4947]: I1129 06:38:04.813472 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbhvj" event={"ID":"1a37f770-07c2-40b1-9f24-ccddc3215658","Type":"ContainerDied","Data":"f23b2903b37d6385814ddc74168550bc6d78b9de063a891462687bc2bc1a7d35"} Nov 29 06:38:05 crc kubenswrapper[4947]: I1129 06:38:05.821132 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4vb4" event={"ID":"159f707e-f150-45c6-9371-6b4b272eaf5d","Type":"ContainerStarted","Data":"27e50bde31a847edd63619b0d0cb538fccb516602f935f9cf663104e2776d315"} Nov 29 06:38:05 crc kubenswrapper[4947]: I1129 06:38:05.843495 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t4vb4" podStartSLOduration=9.964231827 podStartE2EDuration="1m31.843479677s" podCreationTimestamp="2025-11-29 06:36:34 +0000 UTC" firstStartedPulling="2025-11-29 06:36:42.267790135 +0000 UTC m=+153.312172216" lastFinishedPulling="2025-11-29 06:38:04.147037985 +0000 UTC m=+235.191420066" observedRunningTime="2025-11-29 06:38:05.842699826 +0000 UTC m=+236.887081907" watchObservedRunningTime="2025-11-29 06:38:05.843479677 +0000 UTC m=+236.887861748" Nov 29 06:38:14 crc kubenswrapper[4947]: I1129 06:38:14.960177 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t4vb4" Nov 29 06:38:14 crc kubenswrapper[4947]: I1129 06:38:14.960686 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-t4vb4" Nov 29 06:38:15 crc kubenswrapper[4947]: I1129 06:38:15.880064 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t4vb4" Nov 29 06:38:15 crc kubenswrapper[4947]: I1129 06:38:15.931397 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t4vb4" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.531129 4947 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 29 06:38:30 crc kubenswrapper[4947]: E1129 06:38:30.532613 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e598e1a-e111-4110-99ea-22bc72820f0c" containerName="pruner" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.532644 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e598e1a-e111-4110-99ea-22bc72820f0c" containerName="pruner" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.532919 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e598e1a-e111-4110-99ea-22bc72820f0c" containerName="pruner" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.533777 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.535734 4947 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.536032 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983" gracePeriod=15 Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.536181 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984" gracePeriod=15 Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.536244 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8" gracePeriod=15 Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.536291 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c" gracePeriod=15 Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.536359 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970" gracePeriod=15 Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.539726 4947 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 29 06:38:30 crc kubenswrapper[4947]: E1129 06:38:30.540076 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.540112 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 29 06:38:30 crc kubenswrapper[4947]: E1129 06:38:30.540151 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.540167 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 29 06:38:30 crc kubenswrapper[4947]: E1129 06:38:30.540186 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.540201 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 29 06:38:30 crc kubenswrapper[4947]: E1129 06:38:30.540255 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.540277 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 29 06:38:30 crc 
kubenswrapper[4947]: E1129 06:38:30.540303 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.540319 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 29 06:38:30 crc kubenswrapper[4947]: E1129 06:38:30.540336 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.540351 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 29 06:38:30 crc kubenswrapper[4947]: E1129 06:38:30.540377 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.540393 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.540631 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.540668 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.540686 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.540709 4947 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.540728 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.540749 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.588149 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.713022 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.713481 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.713525 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.713573 4947 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.713604 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.713638 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.713742 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.713779 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:38:30 crc 
kubenswrapper[4947]: I1129 06:38:30.814492 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.814568 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.814602 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.814642 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.814667 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 
06:38:30.814746 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.814812 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.814843 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.814964 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.815017 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.815065 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.815103 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.815142 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.815180 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.815257 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.815301 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.884172 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.977657 4947 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Nov 29 06:38:30 crc kubenswrapper[4947]: I1129 06:38:30.977715 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Nov 29 06:38:33 crc kubenswrapper[4947]: I1129 06:38:33.996918 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 29 06:38:33 crc kubenswrapper[4947]: I1129 06:38:33.998825 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 29 06:38:33 crc kubenswrapper[4947]: I1129 06:38:33.999684 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970" exitCode=2 Nov 29 06:38:35 crc kubenswrapper[4947]: E1129 06:38:35.829519 4947 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:35 crc kubenswrapper[4947]: E1129 06:38:35.830167 4947 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:35 crc kubenswrapper[4947]: E1129 06:38:35.830502 4947 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:35 crc kubenswrapper[4947]: E1129 06:38:35.830696 4947 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:35 crc kubenswrapper[4947]: E1129 06:38:35.830849 4947 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:35 crc kubenswrapper[4947]: I1129 06:38:35.830867 4947 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 29 06:38:35 crc kubenswrapper[4947]: E1129 06:38:35.831110 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="200ms" Nov 29 06:38:36 crc kubenswrapper[4947]: I1129 06:38:36.013541 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 29 06:38:36 crc kubenswrapper[4947]: I1129 06:38:36.015648 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 29 06:38:36 crc kubenswrapper[4947]: I1129 06:38:36.016428 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984" exitCode=0 Nov 29 06:38:36 crc kubenswrapper[4947]: I1129 06:38:36.016468 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8" exitCode=0 Nov 29 06:38:36 crc kubenswrapper[4947]: I1129 06:38:36.016480 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c" exitCode=0 Nov 29 06:38:36 crc kubenswrapper[4947]: I1129 06:38:36.016489 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983" exitCode=0 Nov 29 06:38:36 crc kubenswrapper[4947]: I1129 06:38:36.016516 4947 scope.go:117] "RemoveContainer" containerID="ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b" Nov 29 06:38:36 crc kubenswrapper[4947]: I1129 06:38:36.018201 4947 generic.go:334] "Generic (PLEG): container finished" podID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" containerID="a19bcd8246942d520bcd5ba7d447594f7f48aa7a1d65ff78216d547f3f04375c" exitCode=0 Nov 29 06:38:36 crc kubenswrapper[4947]: I1129 06:38:36.018264 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"05e8a117-69fe-4488-b4e4-c0d7f1b4a63a","Type":"ContainerDied","Data":"a19bcd8246942d520bcd5ba7d447594f7f48aa7a1d65ff78216d547f3f04375c"} Nov 29 06:38:36 crc kubenswrapper[4947]: I1129 06:38:36.019001 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:36 crc kubenswrapper[4947]: I1129 06:38:36.019387 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:36 crc kubenswrapper[4947]: E1129 06:38:36.032394 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="400ms" Nov 29 06:38:36 crc kubenswrapper[4947]: E1129 06:38:36.434073 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="800ms" Nov 29 06:38:37 crc kubenswrapper[4947]: E1129 06:38:37.235022 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="1.6s" Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 
06:38:37.658939 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.659640 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.659957 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.666391 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.667229 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.667738 4947 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.668048 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.668407 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.795113 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.795209 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05e8a117-69fe-4488-b4e4-c0d7f1b4a63a-kube-api-access\") pod \"05e8a117-69fe-4488-b4e4-c0d7f1b4a63a\" (UID: \"05e8a117-69fe-4488-b4e4-c0d7f1b4a63a\") " Nov 29 06:38:37 crc kubenswrapper[4947]: 
I1129 06:38:37.795296 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.795336 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05e8a117-69fe-4488-b4e4-c0d7f1b4a63a-kubelet-dir\") pod \"05e8a117-69fe-4488-b4e4-c0d7f1b4a63a\" (UID: \"05e8a117-69fe-4488-b4e4-c0d7f1b4a63a\") "
Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.795364 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.795342 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.795396 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05e8a117-69fe-4488-b4e4-c0d7f1b4a63a-var-lock\") pod \"05e8a117-69fe-4488-b4e4-c0d7f1b4a63a\" (UID: \"05e8a117-69fe-4488-b4e4-c0d7f1b4a63a\") "
Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.795445 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05e8a117-69fe-4488-b4e4-c0d7f1b4a63a-var-lock" (OuterVolumeSpecName: "var-lock") pod "05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" (UID: "05e8a117-69fe-4488-b4e4-c0d7f1b4a63a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.795456 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.795498 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.795456 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05e8a117-69fe-4488-b4e4-c0d7f1b4a63a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" (UID: "05e8a117-69fe-4488-b4e4-c0d7f1b4a63a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.796140 4947 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.796189 4947 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.796206 4947 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05e8a117-69fe-4488-b4e4-c0d7f1b4a63a-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.796241 4947 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.796260 4947 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05e8a117-69fe-4488-b4e4-c0d7f1b4a63a-var-lock\") on node \"crc\" DevicePath \"\""
Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.803009 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05e8a117-69fe-4488-b4e4-c0d7f1b4a63a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" (UID: "05e8a117-69fe-4488-b4e4-c0d7f1b4a63a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 06:38:37 crc kubenswrapper[4947]: I1129 06:38:37.897099 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05e8a117-69fe-4488-b4e4-c0d7f1b4a63a-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 29 06:38:38 crc kubenswrapper[4947]: I1129 06:38:38.029812 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"05e8a117-69fe-4488-b4e4-c0d7f1b4a63a","Type":"ContainerDied","Data":"97cad3ec590558426124306fc3517db71333704981521242c74c5ee0cc3ffb4a"}
Nov 29 06:38:38 crc kubenswrapper[4947]: I1129 06:38:38.029850 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Nov 29 06:38:38 crc kubenswrapper[4947]: I1129 06:38:38.029858 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97cad3ec590558426124306fc3517db71333704981521242c74c5ee0cc3ffb4a"
Nov 29 06:38:38 crc kubenswrapper[4947]: I1129 06:38:38.039693 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Nov 29 06:38:38 crc kubenswrapper[4947]: I1129 06:38:38.041945 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 29 06:38:38 crc kubenswrapper[4947]: I1129 06:38:38.052937 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:38 crc kubenswrapper[4947]: I1129 06:38:38.053604 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:38 crc kubenswrapper[4947]: I1129 06:38:38.053904 4947 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:38 crc kubenswrapper[4947]: I1129 06:38:38.057461 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:38 crc kubenswrapper[4947]: I1129 06:38:38.058052 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:38 crc kubenswrapper[4947]: I1129 06:38:38.058426 4947 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:38 crc kubenswrapper[4947]: E1129 06:38:38.401589 4947 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.47:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-phgdn.187c66ec229416c5 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-phgdn,UID:b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8,APIVersion:v1,ResourceVersion:28371,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 34.594s (34.594s including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 06:38:38.400837317 +0000 UTC m=+269.445219398,LastTimestamp:2025-11-29 06:38:38.400837317 +0000 UTC m=+269.445219398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 29 06:38:38 crc kubenswrapper[4947]: E1129 06:38:38.837162 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="3.2s"
Nov 29 06:38:38 crc kubenswrapper[4947]: I1129 06:38:38.838233 4947 scope.go:117] "RemoveContainer" containerID="3fb63d3fae21b626041036425ac0dbe4cc0fba792aa82812037eb0cf64036984"
Nov 29 06:38:39 crc kubenswrapper[4947]: W1129 06:38:39.031853 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-67f93df53aeed6a6d5a881757eeb48250763dfb4580506a343c0cd9fa8427db7 WatchSource:0}: Error finding container 67f93df53aeed6a6d5a881757eeb48250763dfb4580506a343c0cd9fa8427db7: Status 404 returned error can't find the container with id 67f93df53aeed6a6d5a881757eeb48250763dfb4580506a343c0cd9fa8427db7
Nov 29 06:38:39 crc kubenswrapper[4947]: I1129 06:38:39.048072 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"67f93df53aeed6a6d5a881757eeb48250763dfb4580506a343c0cd9fa8427db7"}
Nov 29 06:38:39 crc kubenswrapper[4947]: I1129 06:38:39.050807 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Nov 29 06:38:39 crc kubenswrapper[4947]: I1129 06:38:39.181549 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:39 crc kubenswrapper[4947]: I1129 06:38:39.182056 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:39 crc kubenswrapper[4947]: I1129 06:38:39.182599 4947 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:39 crc kubenswrapper[4947]: I1129 06:38:39.185310 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Nov 29 06:38:39 crc kubenswrapper[4947]: I1129 06:38:39.631162 4947 scope.go:117] "RemoveContainer" containerID="ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b"
Nov 29 06:38:39 crc kubenswrapper[4947]: E1129 06:38:39.631854 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\": container with ID starting with ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b not found: ID does not exist" containerID="ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b"
Nov 29 06:38:39 crc kubenswrapper[4947]: I1129 06:38:39.631904 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b"} err="failed to get container status \"ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\": rpc error: code = NotFound desc = could not find container \"ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b\": container with ID starting with ce17a0c77733e53cc77e1ced6944f39b53419840d78420bc7cf4ad34fec82e8b not found: ID does not exist"
Nov 29 06:38:39 crc kubenswrapper[4947]: I1129 06:38:39.631940 4947 scope.go:117] "RemoveContainer" containerID="0ce7bb31402783842d6a20f7d0667426065729bdee1f7cf4fcbe636173e79da8"
Nov 29 06:38:39 crc kubenswrapper[4947]: I1129 06:38:39.669910 4947 scope.go:117] "RemoveContainer" containerID="a73fed24d3da601900cf1f54e0adc59daba09bd9a50813cc39508f2d8c66ae5c"
Nov 29 06:38:39 crc kubenswrapper[4947]: I1129 06:38:39.744076 4947 scope.go:117] "RemoveContainer" containerID="187288a1894fa8a8aaf843f1889edb54adeb0771fb0a5759a1346d1ae04f9970"
Nov 29 06:38:39 crc kubenswrapper[4947]: I1129 06:38:39.778147 4947 scope.go:117] "RemoveContainer" containerID="e518bb57f917b37cc08b0f099557c012da1b6c08a168ff3acb262abc2c9d6983"
Nov 29 06:38:39 crc kubenswrapper[4947]: I1129 06:38:39.830252 4947 scope.go:117] "RemoveContainer" containerID="c25f700a6bef6a4d36e501040a179d4a1fc314d356577578d77db9ae868e980c"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.072029 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phgdn" event={"ID":"b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8","Type":"ContainerStarted","Data":"d95e3edb448e83c6e101d519aab848bff447a7d2868f37d4581bde4155486473"}
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.073083 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.073438 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.073712 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.076645 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbhvj" event={"ID":"1a37f770-07c2-40b1-9f24-ccddc3215658","Type":"ContainerStarted","Data":"d80950f3cbe94e5e3050599144d1cebbe62ec504c014ce28ba34c389c2765f48"}
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.077486 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.077873 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.078122 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.078348 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.078953 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w24f6" event={"ID":"fd94a1d6-7039-4b84-aa44-ee8ec166da24","Type":"ContainerStarted","Data":"6d2570c91b1cae30cc1b3a92b39850cf05c6aefcc546590bd2f8036d76002d51"}
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.079862 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.080005 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.080160 4947 status_manager.go:851] "Failed to get status for pod" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.080333 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.081385 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.085195 4947 generic.go:334] "Generic (PLEG): container finished" podID="253ae1cb-50f4-48e7-a004-a70a958c27cd" containerID="c2fae668c2ec078a95e6c7b2608cc32d7b1257a983016f67137bb7dca946cbc9" exitCode=0
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.085257 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zln5" event={"ID":"253ae1cb-50f4-48e7-a004-a70a958c27cd","Type":"ContainerDied","Data":"c2fae668c2ec078a95e6c7b2608cc32d7b1257a983016f67137bb7dca946cbc9"}
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.085886 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.086248 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.086528 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.086796 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.087035 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.087277 4947 status_manager.go:851] "Failed to get status for pod" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.089637 4947 generic.go:334] "Generic (PLEG): container finished" podID="1e620beb-b5db-4321-b404-0ef499ded600" containerID="a64f74c2e3ba9f27f62725df7e0a64846a2da7f01b7d1886cf22e755e666235a" exitCode=0
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.089698 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fq4zz" event={"ID":"1e620beb-b5db-4321-b404-0ef499ded600","Type":"ContainerDied","Data":"a64f74c2e3ba9f27f62725df7e0a64846a2da7f01b7d1886cf22e755e666235a"}
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.090502 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.090717 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.090996 4947 status_manager.go:851] "Failed to get status for pod" podUID="1e620beb-b5db-4321-b404-0ef499ded600" pod="openshift-marketplace/redhat-marketplace-fq4zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fq4zz\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.091281 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.091570 4947 status_manager.go:851] "Failed to get status for pod" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.091745 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.091963 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.095556 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqz4n" event={"ID":"658b72a9-13fc-4881-88c5-109b221bbc48","Type":"ContainerStarted","Data":"2e178f5efb0c6e07219ed6d7ed32a5fe57693bf837256dc21fd3955805b19763"}
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.096205 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.096524 4947 status_manager.go:851] "Failed to get status for pod" podUID="1e620beb-b5db-4321-b404-0ef499ded600" pod="openshift-marketplace/redhat-marketplace-fq4zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fq4zz\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.096842 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.096985 4947 status_manager.go:851] "Failed to get status for pod" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.097167 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.097397 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.097590 4947 status_manager.go:851] "Failed to get status for pod" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" pod="openshift-marketplace/community-operators-wqz4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wqz4n\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.097680 4947 generic.go:334] "Generic (PLEG): container finished" podID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" containerID="2924e68fe6b0334e6b27b698c25d36a60d6ece07f2b9ca5328e18bfe18cf4d00" exitCode=0
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.097720 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsz24" event={"ID":"93c9ef47-b7f5-41f4-8e91-bee4f16b658d","Type":"ContainerDied","Data":"2924e68fe6b0334e6b27b698c25d36a60d6ece07f2b9ca5328e18bfe18cf4d00"}
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.097754 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.098195 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.098630 4947 status_manager.go:851] "Failed to get status for pod" podUID="1e620beb-b5db-4321-b404-0ef499ded600" pod="openshift-marketplace/redhat-marketplace-fq4zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fq4zz\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.099036 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.099348 4947 status_manager.go:851] "Failed to get status for pod" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.099560 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.099710 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.099900 4947 status_manager.go:851] "Failed to get status for pod" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" pod="openshift-marketplace/community-operators-wqz4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wqz4n\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.100013 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6199b757c8dee13c0d44384a4a68198095a559a7877f519a7913d261149cbd16"}
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.100093 4947 status_manager.go:851] "Failed to get status for pod" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" pod="openshift-marketplace/redhat-operators-qsz24" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qsz24\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.101348 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.101674 4947 status_manager.go:851] "Failed to get status for pod" podUID="1e620beb-b5db-4321-b404-0ef499ded600" pod="openshift-marketplace/redhat-marketplace-fq4zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fq4zz\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.102035 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.102357 4947 status_manager.go:851] "Failed to get status for pod" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.102614 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.102886 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.103123 4947 status_manager.go:851] "Failed to get status for pod" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" pod="openshift-marketplace/community-operators-wqz4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wqz4n\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.103417 4947 status_manager.go:851] "Failed to get status for pod" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" pod="openshift-marketplace/redhat-operators-qsz24" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qsz24\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.103614 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: I1129 06:38:41.103755 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused"
Nov 29 06:38:41 crc kubenswrapper[4947]: E1129 06:38:41.167088 4947 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.47:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-phgdn.187c66ec229416c5 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-phgdn,UID:b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8,APIVersion:v1,ResourceVersion:28371,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 34.594s (34.594s including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 06:38:38.400837317 +0000 UTC m=+269.445219398,LastTimestamp:2025-11-29 06:38:38.400837317 +0000 UTC m=+269.445219398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 29 06:38:42 crc kubenswrapper[4947]: E1129 06:38:42.038987 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="6.4s" Nov 29 06:38:42 crc kubenswrapper[4947]: I1129 06:38:42.108010 4947 generic.go:334] "Generic (PLEG): container finished" podID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" containerID="6d2570c91b1cae30cc1b3a92b39850cf05c6aefcc546590bd2f8036d76002d51" exitCode=0 Nov 29 06:38:42 crc kubenswrapper[4947]: I1129 06:38:42.108111 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w24f6" event={"ID":"fd94a1d6-7039-4b84-aa44-ee8ec166da24","Type":"ContainerDied","Data":"6d2570c91b1cae30cc1b3a92b39850cf05c6aefcc546590bd2f8036d76002d51"} Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.130164 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-phgdn" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.130547 4947 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-phgdn" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.131822 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fq4zz" event={"ID":"1e620beb-b5db-4321-b404-0ef499ded600","Type":"ContainerStarted","Data":"57fa2c131f3e218b323f0f7cb8f09fe3245232caad748f83b980bc0f141674fa"} Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.178268 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.180119 4947 status_manager.go:851] "Failed to get status for pod" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" pod="openshift-marketplace/community-operators-wqz4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wqz4n\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.180551 4947 status_manager.go:851] "Failed to get status for pod" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" pod="openshift-marketplace/redhat-operators-qsz24" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qsz24\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.181398 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.181956 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" 
pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.182539 4947 status_manager.go:851] "Failed to get status for pod" podUID="1e620beb-b5db-4321-b404-0ef499ded600" pod="openshift-marketplace/redhat-marketplace-fq4zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fq4zz\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.183247 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.184727 4947 status_manager.go:851] "Failed to get status for pod" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.185306 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.185967 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.199432 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-phgdn" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.200051 4947 status_manager.go:851] "Failed to get status for pod" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.200606 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.200889 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.201145 4947 status_manager.go:851] "Failed to get status for pod" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" pod="openshift-marketplace/community-operators-wqz4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wqz4n\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc 
kubenswrapper[4947]: I1129 06:38:45.201386 4947 status_manager.go:851] "Failed to get status for pod" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" pod="openshift-marketplace/redhat-operators-qsz24" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qsz24\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.201656 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.201868 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.202112 4947 status_manager.go:851] "Failed to get status for pod" podUID="1e620beb-b5db-4321-b404-0ef499ded600" pod="openshift-marketplace/redhat-marketplace-fq4zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fq4zz\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.202475 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc 
kubenswrapper[4947]: I1129 06:38:45.205275 4947 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eabaff26-a896-4929-8b32-6e32efe02ffc" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.205296 4947 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eabaff26-a896-4929-8b32-6e32efe02ffc" Nov 29 06:38:52 crc kubenswrapper[4947]: E1129 06:38:45.205780 4947 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.206520 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.378733 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gbhvj" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.378822 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gbhvj" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.436841 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gbhvj" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.437832 4947 status_manager.go:851] "Failed to get status for pod" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.438753 4947 status_manager.go:851] "Failed to get 
status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.439279 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.439737 4947 status_manager.go:851] "Failed to get status for pod" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" pod="openshift-marketplace/redhat-operators-qsz24" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qsz24\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.440249 4947 status_manager.go:851] "Failed to get status for pod" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" pod="openshift-marketplace/community-operators-wqz4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wqz4n\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.440873 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.441384 4947 status_manager.go:851] "Failed to get status for 
pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.442025 4947 status_manager.go:851] "Failed to get status for pod" podUID="1e620beb-b5db-4321-b404-0ef499ded600" pod="openshift-marketplace/redhat-marketplace-fq4zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fq4zz\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.442739 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.542976 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wqz4n" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.544047 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wqz4n" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.617842 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wqz4n" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.618563 4947 status_manager.go:851] "Failed to get status for pod" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 
38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.618948 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.619572 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.620297 4947 status_manager.go:851] "Failed to get status for pod" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" pod="openshift-marketplace/community-operators-wqz4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wqz4n\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.620741 4947 status_manager.go:851] "Failed to get status for pod" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" pod="openshift-marketplace/redhat-operators-qsz24" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qsz24\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.621318 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 
38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.621903 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.622388 4947 status_manager.go:851] "Failed to get status for pod" podUID="1e620beb-b5db-4321-b404-0ef499ded600" pod="openshift-marketplace/redhat-marketplace-fq4zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fq4zz\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:45.622939 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.180456 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gbhvj" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.181509 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.182187 4947 status_manager.go:851] "Failed to get status for pod" 
podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.182671 4947 status_manager.go:851] "Failed to get status for pod" podUID="1e620beb-b5db-4321-b404-0ef499ded600" pod="openshift-marketplace/redhat-marketplace-fq4zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fq4zz\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.184526 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.185040 4947 status_manager.go:851] "Failed to get status for pod" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.185629 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.186129 4947 status_manager.go:851] "Failed to get status for pod" 
podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.186567 4947 status_manager.go:851] "Failed to get status for pod" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" pod="openshift-marketplace/community-operators-wqz4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wqz4n\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.186950 4947 status_manager.go:851] "Failed to get status for pod" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" pod="openshift-marketplace/redhat-operators-qsz24" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qsz24\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.200863 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wqz4n" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.201421 4947 status_manager.go:851] "Failed to get status for pod" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" pod="openshift-marketplace/redhat-operators-qsz24" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qsz24\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.201883 4947 status_manager.go:851] "Failed to get status for pod" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" pod="openshift-marketplace/community-operators-wqz4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wqz4n\": dial tcp 38.102.83.47:6443: 
connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.203407 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.203650 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.204002 4947 status_manager.go:851] "Failed to get status for pod" podUID="1e620beb-b5db-4321-b404-0ef499ded600" pod="openshift-marketplace/redhat-marketplace-fq4zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fq4zz\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.204513 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.204928 4947 status_manager.go:851] "Failed to get status for pod" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: 
connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.205391 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.205756 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.207809 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-phgdn" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.208385 4947 status_manager.go:851] "Failed to get status for pod" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.208838 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.209462 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.209858 4947 status_manager.go:851] "Failed to get status for pod" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" pod="openshift-marketplace/community-operators-wqz4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wqz4n\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.210199 4947 status_manager.go:851] "Failed to get status for pod" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" pod="openshift-marketplace/redhat-operators-qsz24" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qsz24\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.210682 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.211070 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.211444 4947 status_manager.go:851] "Failed to get status for pod" podUID="1e620beb-b5db-4321-b404-0ef499ded600" 
pod="openshift-marketplace/redhat-marketplace-fq4zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fq4zz\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:46.211891 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:47.911647 4947 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:47.911709 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: E1129 06:38:47.996844 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:38:47Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:38:47Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:38:47Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:38:47Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[],\\\"sizeBytes\\\":1605131077},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:6dbcf185d7aecc64ad77a55ea28c8b2e46764d0976aa0cb7b7e359d7db7c7d99\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:aa3b0a1844156935b9005476446d3a6e00eeaa29c793005658af056bb3739900\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201826122},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:51ac30ca0620a42d370d558
857cd469cdf23a21628a64e38d7a216b9e5021abd\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:6dead86fcee7b924770c08041260eabd5cea0ec81f21621479b03a9a2c6d2c7e\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201184084},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf
0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b
6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f
0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"nam
es\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: E1129 06:38:47.997318 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: E1129 06:38:47.997651 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: E1129 06:38:47.998022 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: E1129 06:38:47.998472 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: E1129 06:38:47.998494 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 06:38:52 crc kubenswrapper[4947]: E1129 06:38:48.441893 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection 
refused" interval="7s" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:49.182259 4947 status_manager.go:851] "Failed to get status for pod" podUID="1e620beb-b5db-4321-b404-0ef499ded600" pod="openshift-marketplace/redhat-marketplace-fq4zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fq4zz\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:49.182754 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:49.183319 4947 status_manager.go:851] "Failed to get status for pod" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:49.183665 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:49.184328 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 
crc kubenswrapper[4947]: I1129 06:38:49.184878 4947 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:49.185356 4947 status_manager.go:851] "Failed to get status for pod" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" pod="openshift-marketplace/community-operators-wqz4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wqz4n\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:49.185851 4947 status_manager.go:851] "Failed to get status for pod" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" pod="openshift-marketplace/redhat-operators-qsz24" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qsz24\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:49.187235 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc kubenswrapper[4947]: I1129 06:38:49.187699 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:38:52 crc 
kubenswrapper[4947]: E1129 06:38:51.168706 4947 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.47:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-phgdn.187c66ec229416c5 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-phgdn,UID:b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8,APIVersion:v1,ResourceVersion:28371,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 34.594s (34.594s including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 06:38:38.400837317 +0000 UTC m=+269.445219398,LastTimestamp:2025-11-29 06:38:38.400837317 +0000 UTC m=+269.445219398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:52.170791 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:52.171005 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: 
connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:52.171362 4947 status_manager.go:851] "Failed to get status for pod" podUID="1e620beb-b5db-4321-b404-0ef499ded600" pod="openshift-marketplace/redhat-marketplace-fq4zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fq4zz\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:52.171861 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:52.172182 4947 status_manager.go:851] "Failed to get status for pod" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:52.172533 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:52.172829 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 
06:39:02 crc kubenswrapper[4947]: I1129 06:38:52.173120 4947 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:52.173428 4947 status_manager.go:851] "Failed to get status for pod" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" pod="openshift-marketplace/community-operators-wqz4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wqz4n\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:52.173742 4947 status_manager.go:851] "Failed to get status for pod" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" pod="openshift-marketplace/redhat-operators-qsz24" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qsz24\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:52.765813 4947 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:52.766372 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 
06:38:53.177469 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:53.177551 4947 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26" exitCode=1 Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:53.177597 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26"} Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:53.180647 4947 scope.go:117] "RemoveContainer" containerID="0d79edf1f405f23d4ec3ff1a52d9583fc26d31d1fb690337a7b081868c397d26" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:53.180640 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:53.181280 4947 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:53.181622 4947 status_manager.go:851] "Failed to get status for pod" podUID="1e620beb-b5db-4321-b404-0ef499ded600" 
pod="openshift-marketplace/redhat-marketplace-fq4zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fq4zz\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:53.181913 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:53.182493 4947 status_manager.go:851] "Failed to get status for pod" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:53.182871 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:53.183174 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:53.183512 4947 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:53.183820 4947 status_manager.go:851] "Failed to get status for pod" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" pod="openshift-marketplace/community-operators-wqz4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wqz4n\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:53.184109 4947 status_manager.go:851] "Failed to get status for pod" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" pod="openshift-marketplace/redhat-operators-qsz24" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qsz24\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:53.184429 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:53.780440 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:39:02 crc kubenswrapper[4947]: E1129 06:38:55.544427 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="7s" Nov 29 06:39:02 crc 
kubenswrapper[4947]: I1129 06:38:56.980174 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fq4zz" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:56.980262 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fq4zz" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.041083 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fq4zz" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.041972 4947 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.042536 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.043019 4947 status_manager.go:851] "Failed to get status for pod" podUID="1e620beb-b5db-4321-b404-0ef499ded600" pod="openshift-marketplace/redhat-marketplace-fq4zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fq4zz\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.043470 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.043792 4947 status_manager.go:851] "Failed to get status for pod" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.044162 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.044526 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.045122 4947 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.045842 4947 status_manager.go:851] "Failed to get status for pod" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" pod="openshift-marketplace/community-operators-wqz4n" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wqz4n\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.046714 4947 status_manager.go:851] "Failed to get status for pod" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" pod="openshift-marketplace/redhat-operators-qsz24" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qsz24\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.047378 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.269102 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fq4zz" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.269577 4947 status_manager.go:851] "Failed to get status for pod" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.269755 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 
06:38:57.269898 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.270050 4947 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.270280 4947 status_manager.go:851] "Failed to get status for pod" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" pod="openshift-marketplace/community-operators-wqz4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wqz4n\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.270485 4947 status_manager.go:851] "Failed to get status for pod" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" pod="openshift-marketplace/redhat-operators-qsz24" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qsz24\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.270730 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.271014 4947 
status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.271356 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.271524 4947 status_manager.go:851] "Failed to get status for pod" podUID="1e620beb-b5db-4321-b404-0ef499ded600" pod="openshift-marketplace/redhat-marketplace-fq4zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fq4zz\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.271726 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:57.911763 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:39:02 crc kubenswrapper[4947]: E1129 06:38:58.011313 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:38:58Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:38:58Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:38:58Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:38:58Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[],\\\"sizeBytes\\\":1605131077},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:6dbcf185d7aecc64ad77a55ea28c8b2e46764d0976aa0cb7b7e359d7db7c7d99\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:aa3b0a1844156935b9005476446d3a6e00eeaa29c793005658af056bb3739900\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201826122},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:51ac30ca0620a42d370d558
857cd469cdf23a21628a64e38d7a216b9e5021abd\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:6dead86fcee7b924770c08041260eabd5cea0ec81f21621479b03a9a2c6d2c7e\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201184084},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf
0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b
6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f
0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"nam
es\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: E1129 06:38:58.011628 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: E1129 06:38:58.011805 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: E1129 06:38:58.011996 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: E1129 06:38:58.012193 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: E1129 06:38:58.012206 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:59.183517 4947 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:59.183961 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:59.184517 4947 status_manager.go:851] "Failed to get status for pod" podUID="1e620beb-b5db-4321-b404-0ef499ded600" pod="openshift-marketplace/redhat-marketplace-fq4zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fq4zz\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:59.185000 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:59.185523 4947 status_manager.go:851] "Failed to get status for pod" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:59.185983 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:59.186423 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:59.186821 4947 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:59.188487 4947 status_manager.go:851] "Failed to get status for pod" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" pod="openshift-marketplace/community-operators-wqz4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wqz4n\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:59.189118 4947 status_manager.go:851] "Failed to get status for pod" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" pod="openshift-marketplace/redhat-operators-qsz24" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qsz24\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:38:59.189617 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:02 crc kubenswrapper[4947]: E1129 06:39:01.170319 4947 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.47:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-phgdn.187c66ec229416c5 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-phgdn,UID:b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8,APIVersion:v1,ResourceVersion:28371,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 34.594s (34.594s including waiting). 
Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 06:38:38.400837317 +0000 UTC m=+269.445219398,LastTimestamp:2025-11-29 06:38:38.400837317 +0000 UTC m=+269.445219398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 29 06:39:02 crc kubenswrapper[4947]: E1129 06:39:02.545456 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="7s" Nov 29 06:39:02 crc kubenswrapper[4947]: I1129 06:39:02.764759 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:39:08 crc kubenswrapper[4947]: E1129 06:39:08.269948 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:39:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:39:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:39:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:39:08Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[],\\\"sizeBytes\\\":1605131077},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:6dbcf185d7aecc64ad77a55ea28c8b2e46764d0976aa0cb7b7e359d7db7c7d99\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:aa3b0a1844156935b9005476446d3a6e00eeaa29c793005658af056bb3739900\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201826122},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:51ac30ca0620a42d370d558
857cd469cdf23a21628a64e38d7a216b9e5021abd\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:6dead86fcee7b924770c08041260eabd5cea0ec81f21621479b03a9a2c6d2c7e\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201184084},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf
0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b
6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f
0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"nam
es\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:08 crc kubenswrapper[4947]: E1129 06:39:08.271344 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:08 crc kubenswrapper[4947]: E1129 06:39:08.272025 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:08 crc kubenswrapper[4947]: E1129 06:39:08.272419 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:08 crc kubenswrapper[4947]: E1129 06:39:08.272879 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:08 crc kubenswrapper[4947]: E1129 06:39:08.272904 4947 kubelet_node_status.go:572] "Unable to update node 
status" err="update node status exceeds retry count" Nov 29 06:39:09 crc kubenswrapper[4947]: I1129 06:39:09.067903 4947 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Nov 29 06:39:09 crc kubenswrapper[4947]: I1129 06:39:09.189393 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:09 crc kubenswrapper[4947]: I1129 06:39:09.190323 4947 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:09 crc kubenswrapper[4947]: I1129 06:39:09.190878 4947 status_manager.go:851] "Failed to get status for pod" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" pod="openshift-marketplace/community-operators-wqz4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wqz4n\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:09 crc kubenswrapper[4947]: I1129 06:39:09.191273 4947 status_manager.go:851] "Failed to get status for pod" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" pod="openshift-marketplace/redhat-operators-qsz24" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qsz24\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:09 crc kubenswrapper[4947]: I1129 06:39:09.191585 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:09 crc kubenswrapper[4947]: I1129 06:39:09.192153 4947 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:09 crc kubenswrapper[4947]: I1129 06:39:09.193300 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:09 crc kubenswrapper[4947]: I1129 06:39:09.193723 4947 status_manager.go:851] "Failed to get status for pod" podUID="1e620beb-b5db-4321-b404-0ef499ded600" pod="openshift-marketplace/redhat-marketplace-fq4zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fq4zz\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:09 crc kubenswrapper[4947]: I1129 06:39:09.194113 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:09 crc kubenswrapper[4947]: I1129 06:39:09.194531 4947 status_manager.go:851] "Failed to get status for pod" 
podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:09 crc kubenswrapper[4947]: I1129 06:39:09.194956 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:09 crc kubenswrapper[4947]: E1129 06:39:09.546319 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="7s" Nov 29 06:39:11 crc kubenswrapper[4947]: E1129 06:39:11.172056 4947 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.47:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-phgdn.187c66ec229416c5 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-phgdn,UID:b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8,APIVersion:v1,ResourceVersion:28371,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 34.594s (34.594s including waiting). 
Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 06:38:38.400837317 +0000 UTC m=+269.445219398,LastTimestamp:2025-11-29 06:38:38.400837317 +0000 UTC m=+269.445219398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 29 06:39:12 crc kubenswrapper[4947]: W1129 06:39:12.912261 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-d4cdeeaf54380be0255b166ea3b343fa8cd56e94fedbaa048912ecef8f1c3b7b WatchSource:0}: Error finding container d4cdeeaf54380be0255b166ea3b343fa8cd56e94fedbaa048912ecef8f1c3b7b: Status 404 returned error can't find the container with id d4cdeeaf54380be0255b166ea3b343fa8cd56e94fedbaa048912ecef8f1c3b7b Nov 29 06:39:13 crc kubenswrapper[4947]: I1129 06:39:13.291257 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d4cdeeaf54380be0255b166ea3b343fa8cd56e94fedbaa048912ecef8f1c3b7b"} Nov 29 06:39:15 crc kubenswrapper[4947]: I1129 06:39:15.409432 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zln5" event={"ID":"253ae1cb-50f4-48e7-a004-a70a958c27cd","Type":"ContainerStarted","Data":"eb9e4b7b7f076600be5b39d1c3d2ea329804a209cdc7289606d4489cde5e4899"} Nov 29 06:39:15 crc kubenswrapper[4947]: I1129 06:39:15.412075 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8c8f3880708b7624dc47e0ec26d298dd3b934ddf6ecd3f47e892036cd038d9ff"} Nov 29 06:39:16 crc kubenswrapper[4947]: I1129 06:39:16.421545 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-qsz24" event={"ID":"93c9ef47-b7f5-41f4-8e91-bee4f16b658d","Type":"ContainerStarted","Data":"e5a081649ed745e050f904f0ae94eb22fa1c6557e9ec6c3b5ee6e8e36030ad83"} Nov 29 06:39:16 crc kubenswrapper[4947]: I1129 06:39:16.425974 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 29 06:39:16 crc kubenswrapper[4947]: I1129 06:39:16.426051 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c64c6567246245787ad9b5262cc97b45e7ab81b00fc9e092edcdde13ffd4f7d5"} Nov 29 06:39:16 crc kubenswrapper[4947]: E1129 06:39:16.547980 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="7s" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.435442 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w24f6" event={"ID":"fd94a1d6-7039-4b84-aa44-ee8ec166da24","Type":"ContainerStarted","Data":"27af4d6836630a884f7269dc64e86119906fed8e3338c4c3d4c2269060dcd173"} Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.435760 4947 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eabaff26-a896-4929-8b32-6e32efe02ffc" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.435785 4947 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eabaff26-a896-4929-8b32-6e32efe02ffc" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.436318 4947 status_manager.go:851] "Failed to get status for pod" 
podUID="1e620beb-b5db-4321-b404-0ef499ded600" pod="openshift-marketplace/redhat-marketplace-fq4zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fq4zz\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:17 crc kubenswrapper[4947]: E1129 06:39:17.436323 4947 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.436603 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.436816 4947 status_manager.go:851] "Failed to get status for pod" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.437147 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.437786 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.438261 4947 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.438501 4947 status_manager.go:851] "Failed to get status for pod" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" pod="openshift-marketplace/community-operators-wqz4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wqz4n\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.438704 4947 status_manager.go:851] "Failed to get status for pod" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" pod="openshift-marketplace/redhat-operators-qsz24" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qsz24\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.438976 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.439277 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" 
pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.439504 4947 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.439892 4947 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.440343 4947 status_manager.go:851] "Failed to get status for pod" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" pod="openshift-marketplace/community-operators-wqz4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wqz4n\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.440846 4947 status_manager.go:851] "Failed to get status for pod" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" pod="openshift-marketplace/redhat-operators-qsz24" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qsz24\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.441339 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.441678 4947 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.442166 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.442643 4947 status_manager.go:851] "Failed to get status for pod" podUID="1e620beb-b5db-4321-b404-0ef499ded600" pod="openshift-marketplace/redhat-marketplace-fq4zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fq4zz\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.443040 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.443381 4947 status_manager.go:851] "Failed to get status for pod" 
podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.443643 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:17 crc kubenswrapper[4947]: I1129 06:39:17.443954 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.441161 4947 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="8c8f3880708b7624dc47e0ec26d298dd3b934ddf6ecd3f47e892036cd038d9ff" exitCode=0 Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.441351 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"8c8f3880708b7624dc47e0ec26d298dd3b934ddf6ecd3f47e892036cd038d9ff"} Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.442103 4947 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" 
Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.442164 4947 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eabaff26-a896-4929-8b32-6e32efe02ffc" Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.442202 4947 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eabaff26-a896-4929-8b32-6e32efe02ffc" Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.442797 4947 status_manager.go:851] "Failed to get status for pod" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" pod="openshift-marketplace/community-operators-wqz4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wqz4n\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: E1129 06:39:18.442917 4947 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.443163 4947 status_manager.go:851] "Failed to get status for pod" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" pod="openshift-marketplace/redhat-operators-qsz24" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qsz24\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.443730 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc 
kubenswrapper[4947]: I1129 06:39:18.444128 4947 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.444451 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.444955 4947 status_manager.go:851] "Failed to get status for pod" podUID="1e620beb-b5db-4321-b404-0ef499ded600" pod="openshift-marketplace/redhat-marketplace-fq4zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fq4zz\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.445375 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.445729 4947 status_manager.go:851] "Failed to get status for pod" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc 
kubenswrapper[4947]: I1129 06:39:18.446066 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.446327 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.446669 4947 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.447081 4947 status_manager.go:851] "Failed to get status for pod" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" pod="openshift-marketplace/community-operators-wqz4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wqz4n\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.447330 4947 status_manager.go:851] "Failed to get status for pod" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" pod="openshift-marketplace/redhat-operators-qsz24" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qsz24\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.448811 
4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.449298 4947 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.449642 4947 status_manager.go:851] "Failed to get status for pod" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" pod="openshift-marketplace/certified-operators-gbhvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gbhvj\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.449948 4947 status_manager.go:851] "Failed to get status for pod" podUID="1e620beb-b5db-4321-b404-0ef499ded600" pod="openshift-marketplace/redhat-marketplace-fq4zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fq4zz\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.450232 4947 status_manager.go:851] "Failed to get status for pod" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" pod="openshift-marketplace/redhat-marketplace-8zln5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8zln5\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc 
kubenswrapper[4947]: I1129 06:39:18.450501 4947 status_manager.go:851] "Failed to get status for pod" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" pod="openshift-marketplace/redhat-operators-w24f6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w24f6\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.450810 4947 status_manager.go:851] "Failed to get status for pod" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" pod="openshift-marketplace/community-operators-phgdn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-phgdn\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.451092 4947 status_manager.go:851] "Failed to get status for pod" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: E1129 06:39:18.615507 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:39:18Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:39:18Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:39:18Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T06:39:18Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:20434c856c20158a4c73986bf7de93188afa338ed356d293a59f9e621072cfc3\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:24f7dab5f4a6fcbb16d41b8a7345f9f9bae2ef1e2c53abed71c4f18eeafebc85\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1605131077},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:6dbcf185d7aecc64ad77a55ea28c8b2e46764d0976aa0cb7b7e359d7db7c7d99\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:aa3b0a1844156935b9005476446d3a6e00eeaa29c793005658af056bb3739900\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"
sizeBytes\\\":1201826122},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:51ac30ca0620a42d370d558857cd469cdf23a21628a64e38d7a216b9e5021abd\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:6dead86fcee7b924770c08041260eabd5cea0ec81f21621479b03a9a2c6d2c7e\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201184084},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e8990432556acad31519b1a73ec32f32d27c2034cf9e5cc4db8980efc7331594\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ebe9f523f5c211a3a0f2570331dddcd5be15b12c1fecd9b8b121f881bfaad029\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1129027903},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},
{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb
68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: E1129 06:39:18.616114 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: E1129 06:39:18.616648 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: E1129 06:39:18.617068 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: E1129 06:39:18.617413 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Nov 29 06:39:18 crc kubenswrapper[4947]: E1129 06:39:18.617438 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.853017 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-w24f6" Nov 29 06:39:18 crc kubenswrapper[4947]: I1129 06:39:18.853273 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w24f6" Nov 29 06:39:19 crc kubenswrapper[4947]: I1129 06:39:19.454417 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f623c5cb5e8a6284c1ac5165ecde8d663f1096bb99b69af8de77c4305bb46aad"} Nov 29 06:39:19 crc kubenswrapper[4947]: I1129 06:39:19.454779 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4e5e50523de15d3f8eb0acab74abe8db5221cb23cfbc57744f2c453899960b1d"} Nov 29 06:39:19 crc kubenswrapper[4947]: I1129 06:39:19.454791 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"37be498cb06a95d376405142bc353e0181c9f3cca2af8d64ccd8f63c696693b2"} Nov 29 06:39:19 crc kubenswrapper[4947]: I1129 06:39:19.907545 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w24f6" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" containerName="registry-server" probeResult="failure" output=< Nov 29 06:39:19 crc kubenswrapper[4947]: timeout: failed to connect service ":50051" within 1s Nov 29 06:39:19 crc kubenswrapper[4947]: > Nov 29 06:39:20 crc kubenswrapper[4947]: I1129 06:39:20.463351 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"41836dcc416077e2a7f3c02875f640b523f0f5e466b9c94f9510c407cfa40bb0"} Nov 29 06:39:21 crc kubenswrapper[4947]: I1129 06:39:21.472307 4947 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"aede6ea5ec49c56c1080774720cc11ab540b0e042600bed3383aefde0ffa91cb"} Nov 29 06:39:21 crc kubenswrapper[4947]: I1129 06:39:21.472696 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:39:21 crc kubenswrapper[4947]: I1129 06:39:21.472780 4947 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eabaff26-a896-4929-8b32-6e32efe02ffc" Nov 29 06:39:21 crc kubenswrapper[4947]: I1129 06:39:21.472810 4947 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eabaff26-a896-4929-8b32-6e32efe02ffc" Nov 29 06:39:22 crc kubenswrapper[4947]: I1129 06:39:22.764415 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:39:23 crc kubenswrapper[4947]: I1129 06:39:23.781274 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:39:23 crc kubenswrapper[4947]: I1129 06:39:23.782519 4947 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 29 06:39:23 crc kubenswrapper[4947]: I1129 06:39:23.782633 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: 
connect: connection refused" Nov 29 06:39:25 crc kubenswrapper[4947]: I1129 06:39:25.207725 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:39:25 crc kubenswrapper[4947]: I1129 06:39:25.208165 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:39:25 crc kubenswrapper[4947]: I1129 06:39:25.215599 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:39:25 crc kubenswrapper[4947]: I1129 06:39:25.494130 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Nov 29 06:39:25 crc kubenswrapper[4947]: I1129 06:39:25.494828 4947 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000" exitCode=1 Nov 29 06:39:25 crc kubenswrapper[4947]: I1129 06:39:25.494876 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000"} Nov 29 06:39:25 crc kubenswrapper[4947]: I1129 06:39:25.496324 4947 scope.go:117] "RemoveContainer" containerID="92f526fea4a10baf89765a67868ca0fe47ce19bbde73cd538bd65add7578f000" Nov 29 06:39:26 crc kubenswrapper[4947]: I1129 06:39:26.481199 4947 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:39:26 crc kubenswrapper[4947]: I1129 06:39:26.501534 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Nov 29 06:39:26 crc kubenswrapper[4947]: I1129 06:39:26.502728 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"04b5215c16fe71b34fb6c6dc8a702cc87d97dcd72835b860c73336325932edef"} Nov 29 06:39:26 crc kubenswrapper[4947]: I1129 06:39:26.503319 4947 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eabaff26-a896-4929-8b32-6e32efe02ffc" Nov 29 06:39:26 crc kubenswrapper[4947]: I1129 06:39:26.503362 4947 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eabaff26-a896-4929-8b32-6e32efe02ffc" Nov 29 06:39:26 crc kubenswrapper[4947]: I1129 06:39:26.507007 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:39:26 crc kubenswrapper[4947]: I1129 06:39:26.520967 4947 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="db7ae625-1d53-4ec1-8e18-019e4f823c65" Nov 29 06:39:27 crc kubenswrapper[4947]: I1129 06:39:27.399270 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8zln5" Nov 29 06:39:27 crc kubenswrapper[4947]: I1129 06:39:27.399332 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8zln5" Nov 29 06:39:27 crc kubenswrapper[4947]: I1129 06:39:27.435515 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8zln5" Nov 29 06:39:27 crc kubenswrapper[4947]: I1129 06:39:27.509014 4947 
kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eabaff26-a896-4929-8b32-6e32efe02ffc" Nov 29 06:39:27 crc kubenswrapper[4947]: I1129 06:39:27.509077 4947 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="eabaff26-a896-4929-8b32-6e32efe02ffc" Nov 29 06:39:27 crc kubenswrapper[4947]: I1129 06:39:27.557041 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8zln5" Nov 29 06:39:28 crc kubenswrapper[4947]: I1129 06:39:28.145397 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qsz24" Nov 29 06:39:28 crc kubenswrapper[4947]: I1129 06:39:28.145442 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qsz24" Nov 29 06:39:28 crc kubenswrapper[4947]: I1129 06:39:28.200675 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qsz24" Nov 29 06:39:28 crc kubenswrapper[4947]: I1129 06:39:28.556943 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qsz24" Nov 29 06:39:28 crc kubenswrapper[4947]: I1129 06:39:28.893204 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w24f6" Nov 29 06:39:28 crc kubenswrapper[4947]: I1129 06:39:28.933572 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w24f6" Nov 29 06:39:29 crc kubenswrapper[4947]: I1129 06:39:29.204838 4947 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="db7ae625-1d53-4ec1-8e18-019e4f823c65" Nov 29 06:39:33 crc kubenswrapper[4947]: 
I1129 06:39:33.784458 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:39:33 crc kubenswrapper[4947]: I1129 06:39:33.788451 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 06:39:36 crc kubenswrapper[4947]: I1129 06:39:36.565458 4947 generic.go:334] "Generic (PLEG): container finished" podID="675de8ae-169a-4737-a290-54cdb32d8cb0" containerID="2111f172b313f21f1ec488e2f051f6deac9e9588feb338a541ecdbd5d75e5c13" exitCode=0 Nov 29 06:39:36 crc kubenswrapper[4947]: I1129 06:39:36.565499 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" event={"ID":"675de8ae-169a-4737-a290-54cdb32d8cb0","Type":"ContainerDied","Data":"2111f172b313f21f1ec488e2f051f6deac9e9588feb338a541ecdbd5d75e5c13"} Nov 29 06:39:36 crc kubenswrapper[4947]: I1129 06:39:36.565840 4947 scope.go:117] "RemoveContainer" containerID="2111f172b313f21f1ec488e2f051f6deac9e9588feb338a541ecdbd5d75e5c13" Nov 29 06:39:37 crc kubenswrapper[4947]: I1129 06:39:37.574212 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6mhws_675de8ae-169a-4737-a290-54cdb32d8cb0/marketplace-operator/1.log" Nov 29 06:39:37 crc kubenswrapper[4947]: I1129 06:39:37.575976 4947 generic.go:334] "Generic (PLEG): container finished" podID="675de8ae-169a-4737-a290-54cdb32d8cb0" containerID="606942d39f8eb0ca22a66583f862ec20060c657a960757f94b0d0c383fa91d6d" exitCode=1 Nov 29 06:39:37 crc kubenswrapper[4947]: I1129 06:39:37.576132 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" event={"ID":"675de8ae-169a-4737-a290-54cdb32d8cb0","Type":"ContainerDied","Data":"606942d39f8eb0ca22a66583f862ec20060c657a960757f94b0d0c383fa91d6d"} Nov 29 06:39:37 crc 
kubenswrapper[4947]: I1129 06:39:37.576251 4947 scope.go:117] "RemoveContainer" containerID="2111f172b313f21f1ec488e2f051f6deac9e9588feb338a541ecdbd5d75e5c13" Nov 29 06:39:37 crc kubenswrapper[4947]: I1129 06:39:37.577198 4947 scope.go:117] "RemoveContainer" containerID="606942d39f8eb0ca22a66583f862ec20060c657a960757f94b0d0c383fa91d6d" Nov 29 06:39:37 crc kubenswrapper[4947]: E1129 06:39:37.578171 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-6mhws_openshift-marketplace(675de8ae-169a-4737-a290-54cdb32d8cb0)\"" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" Nov 29 06:39:38 crc kubenswrapper[4947]: I1129 06:39:38.005709 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" Nov 29 06:39:38 crc kubenswrapper[4947]: I1129 06:39:38.006208 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" Nov 29 06:39:38 crc kubenswrapper[4947]: I1129 06:39:38.581878 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6mhws_675de8ae-169a-4737-a290-54cdb32d8cb0/marketplace-operator/1.log" Nov 29 06:39:38 crc kubenswrapper[4947]: I1129 06:39:38.582562 4947 scope.go:117] "RemoveContainer" containerID="606942d39f8eb0ca22a66583f862ec20060c657a960757f94b0d0c383fa91d6d" Nov 29 06:39:38 crc kubenswrapper[4947]: E1129 06:39:38.582759 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator 
pod=marketplace-operator-79b997595-6mhws_openshift-marketplace(675de8ae-169a-4737-a290-54cdb32d8cb0)\"" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" Nov 29 06:39:52 crc kubenswrapper[4947]: I1129 06:39:52.438714 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 29 06:39:54 crc kubenswrapper[4947]: I1129 06:39:54.178858 4947 scope.go:117] "RemoveContainer" containerID="606942d39f8eb0ca22a66583f862ec20060c657a960757f94b0d0c383fa91d6d" Nov 29 06:39:54 crc kubenswrapper[4947]: I1129 06:39:54.673310 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6mhws_675de8ae-169a-4737-a290-54cdb32d8cb0/marketplace-operator/2.log" Nov 29 06:39:54 crc kubenswrapper[4947]: I1129 06:39:54.674192 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6mhws_675de8ae-169a-4737-a290-54cdb32d8cb0/marketplace-operator/1.log" Nov 29 06:39:54 crc kubenswrapper[4947]: I1129 06:39:54.674265 4947 generic.go:334] "Generic (PLEG): container finished" podID="675de8ae-169a-4737-a290-54cdb32d8cb0" containerID="9cb1b99a6de15a7299b40ce7a384ef6f8eb2aa3e76ae8306f11b78be3a24f57f" exitCode=1 Nov 29 06:39:54 crc kubenswrapper[4947]: I1129 06:39:54.674296 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" event={"ID":"675de8ae-169a-4737-a290-54cdb32d8cb0","Type":"ContainerDied","Data":"9cb1b99a6de15a7299b40ce7a384ef6f8eb2aa3e76ae8306f11b78be3a24f57f"} Nov 29 06:39:54 crc kubenswrapper[4947]: I1129 06:39:54.674331 4947 scope.go:117] "RemoveContainer" containerID="606942d39f8eb0ca22a66583f862ec20060c657a960757f94b0d0c383fa91d6d" Nov 29 06:39:54 crc kubenswrapper[4947]: I1129 06:39:54.674694 4947 scope.go:117] "RemoveContainer" 
containerID="9cb1b99a6de15a7299b40ce7a384ef6f8eb2aa3e76ae8306f11b78be3a24f57f" Nov 29 06:39:54 crc kubenswrapper[4947]: E1129 06:39:54.674921 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-6mhws_openshift-marketplace(675de8ae-169a-4737-a290-54cdb32d8cb0)\"" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" Nov 29 06:39:55 crc kubenswrapper[4947]: I1129 06:39:55.681288 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6mhws_675de8ae-169a-4737-a290-54cdb32d8cb0/marketplace-operator/2.log" Nov 29 06:39:58 crc kubenswrapper[4947]: I1129 06:39:58.006588 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" Nov 29 06:39:58 crc kubenswrapper[4947]: I1129 06:39:58.006863 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" Nov 29 06:39:58 crc kubenswrapper[4947]: I1129 06:39:58.007382 4947 scope.go:117] "RemoveContainer" containerID="9cb1b99a6de15a7299b40ce7a384ef6f8eb2aa3e76ae8306f11b78be3a24f57f" Nov 29 06:39:58 crc kubenswrapper[4947]: E1129 06:39:58.007594 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-6mhws_openshift-marketplace(675de8ae-169a-4737-a290-54cdb32d8cb0)\"" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" Nov 29 06:40:01 crc kubenswrapper[4947]: I1129 06:40:01.063100 4947 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-console"/"kube-root-ca.crt" Nov 29 06:40:02 crc kubenswrapper[4947]: I1129 06:40:02.632338 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 29 06:40:06 crc kubenswrapper[4947]: I1129 06:40:06.706488 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 29 06:40:07 crc kubenswrapper[4947]: I1129 06:40:07.721607 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 29 06:40:07 crc kubenswrapper[4947]: I1129 06:40:07.870768 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 29 06:40:08 crc kubenswrapper[4947]: I1129 06:40:08.497349 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 29 06:40:08 crc kubenswrapper[4947]: I1129 06:40:08.595916 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 29 06:40:09 crc kubenswrapper[4947]: I1129 06:40:09.379547 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 29 06:40:10 crc kubenswrapper[4947]: I1129 06:40:10.571183 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 29 06:40:10 crc kubenswrapper[4947]: I1129 06:40:10.676552 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 29 06:40:10 crc kubenswrapper[4947]: I1129 06:40:10.901542 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 29 06:40:11 crc kubenswrapper[4947]: I1129 
06:40:11.739681 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 29 06:40:12 crc kubenswrapper[4947]: I1129 06:40:12.081640 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 29 06:40:12 crc kubenswrapper[4947]: I1129 06:40:12.110000 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 29 06:40:12 crc kubenswrapper[4947]: I1129 06:40:12.284073 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 29 06:40:12 crc kubenswrapper[4947]: I1129 06:40:12.714699 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 29 06:40:13 crc kubenswrapper[4947]: I1129 06:40:13.108132 4947 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 29 06:40:13 crc kubenswrapper[4947]: I1129 06:40:13.178683 4947 scope.go:117] "RemoveContainer" containerID="9cb1b99a6de15a7299b40ce7a384ef6f8eb2aa3e76ae8306f11b78be3a24f57f" Nov 29 06:40:13 crc kubenswrapper[4947]: E1129 06:40:13.178918 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-6mhws_openshift-marketplace(675de8ae-169a-4737-a290-54cdb32d8cb0)\"" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" Nov 29 06:40:13 crc kubenswrapper[4947]: I1129 06:40:13.703809 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 29 06:40:14 crc kubenswrapper[4947]: I1129 06:40:14.290788 4947 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 29 06:40:14 crc kubenswrapper[4947]: I1129 06:40:14.546415 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 29 06:40:14 crc kubenswrapper[4947]: I1129 06:40:14.632332 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 29 06:40:15 crc kubenswrapper[4947]: I1129 06:40:15.572339 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 29 06:40:15 crc kubenswrapper[4947]: I1129 06:40:15.868358 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 29 06:40:16 crc kubenswrapper[4947]: I1129 06:40:16.470684 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 29 06:40:16 crc kubenswrapper[4947]: I1129 06:40:16.505075 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 29 06:40:16 crc kubenswrapper[4947]: I1129 06:40:16.947840 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 29 06:40:17 crc kubenswrapper[4947]: I1129 06:40:17.069771 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 29 06:40:17 crc kubenswrapper[4947]: I1129 06:40:17.213364 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 29 06:40:17 crc kubenswrapper[4947]: I1129 06:40:17.479176 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 29 06:40:17 crc kubenswrapper[4947]: I1129 
06:40:17.732983 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 29 06:40:18 crc kubenswrapper[4947]: I1129 06:40:18.006803 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 29 06:40:18 crc kubenswrapper[4947]: I1129 06:40:18.758818 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 29 06:40:18 crc kubenswrapper[4947]: I1129 06:40:18.787676 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 29 06:40:18 crc kubenswrapper[4947]: I1129 06:40:18.868774 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 29 06:40:19 crc kubenswrapper[4947]: I1129 06:40:19.351113 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 29 06:40:19 crc kubenswrapper[4947]: I1129 06:40:19.398751 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 29 06:40:19 crc kubenswrapper[4947]: I1129 06:40:19.400876 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 29 06:40:19 crc kubenswrapper[4947]: I1129 06:40:19.656324 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 29 06:40:19 crc kubenswrapper[4947]: I1129 06:40:19.835648 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 29 06:40:20 crc kubenswrapper[4947]: I1129 06:40:20.027687 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 
29 06:40:20 crc kubenswrapper[4947]: I1129 06:40:20.028415 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 29 06:40:20 crc kubenswrapper[4947]: I1129 06:40:20.403465 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 29 06:40:20 crc kubenswrapper[4947]: I1129 06:40:20.563309 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 29 06:40:20 crc kubenswrapper[4947]: I1129 06:40:20.576883 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 29 06:40:20 crc kubenswrapper[4947]: I1129 06:40:20.813616 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 29 06:40:20 crc kubenswrapper[4947]: I1129 06:40:20.841294 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 29 06:40:21 crc kubenswrapper[4947]: I1129 06:40:21.223263 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 29 06:40:21 crc kubenswrapper[4947]: I1129 06:40:21.323921 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 29 06:40:21 crc kubenswrapper[4947]: I1129 06:40:21.811201 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 29 06:40:21 crc kubenswrapper[4947]: I1129 06:40:21.966893 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 29 06:40:22 crc kubenswrapper[4947]: I1129 06:40:22.028799 4947 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 29 06:40:22 crc kubenswrapper[4947]: I1129 06:40:22.297136 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 29 06:40:22 crc kubenswrapper[4947]: I1129 06:40:22.348165 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 29 06:40:22 crc kubenswrapper[4947]: I1129 06:40:22.389903 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 29 06:40:22 crc kubenswrapper[4947]: I1129 06:40:22.489725 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 29 06:40:22 crc kubenswrapper[4947]: I1129 06:40:22.528578 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 29 06:40:22 crc kubenswrapper[4947]: I1129 06:40:22.734684 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 29 06:40:22 crc kubenswrapper[4947]: I1129 06:40:22.795707 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 29 06:40:22 crc kubenswrapper[4947]: I1129 06:40:22.951308 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 29 06:40:22 crc kubenswrapper[4947]: I1129 06:40:22.984648 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 29 06:40:22 crc kubenswrapper[4947]: I1129 06:40:22.987572 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 06:40:22 crc kubenswrapper[4947]: I1129 06:40:22.987642 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 06:40:23 crc kubenswrapper[4947]: I1129 06:40:23.331315 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 29 06:40:23 crc kubenswrapper[4947]: I1129 06:40:23.601451 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 29 06:40:24 crc kubenswrapper[4947]: I1129 06:40:24.178879 4947 scope.go:117] "RemoveContainer" containerID="9cb1b99a6de15a7299b40ce7a384ef6f8eb2aa3e76ae8306f11b78be3a24f57f" Nov 29 06:40:24 crc kubenswrapper[4947]: I1129 06:40:24.424244 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 29 06:40:24 crc kubenswrapper[4947]: I1129 06:40:24.653085 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 29 06:40:24 crc kubenswrapper[4947]: I1129 06:40:24.760389 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 29 06:40:24 crc kubenswrapper[4947]: I1129 06:40:24.845777 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6mhws_675de8ae-169a-4737-a290-54cdb32d8cb0/marketplace-operator/2.log" Nov 29 06:40:24 crc kubenswrapper[4947]: I1129 06:40:24.846058 4947 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" event={"ID":"675de8ae-169a-4737-a290-54cdb32d8cb0","Type":"ContainerStarted","Data":"e30cf77ba007e7f1f28eff7678bb4f5d42c6343b852d9a769d57318fc5668082"} Nov 29 06:40:24 crc kubenswrapper[4947]: I1129 06:40:24.846482 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" Nov 29 06:40:24 crc kubenswrapper[4947]: I1129 06:40:24.847498 4947 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6mhws container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Nov 29 06:40:24 crc kubenswrapper[4947]: I1129 06:40:24.847566 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Nov 29 06:40:24 crc kubenswrapper[4947]: I1129 06:40:24.855718 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 29 06:40:24 crc kubenswrapper[4947]: I1129 06:40:24.915314 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 29 06:40:25 crc kubenswrapper[4947]: I1129 06:40:25.035107 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 29 06:40:25 crc kubenswrapper[4947]: I1129 06:40:25.232134 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 29 06:40:25 crc kubenswrapper[4947]: I1129 
06:40:25.403833 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 29 06:40:25 crc kubenswrapper[4947]: I1129 06:40:25.424214 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 29 06:40:25 crc kubenswrapper[4947]: I1129 06:40:25.440999 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 29 06:40:25 crc kubenswrapper[4947]: I1129 06:40:25.541681 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 29 06:40:25 crc kubenswrapper[4947]: I1129 06:40:25.574506 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 29 06:40:25 crc kubenswrapper[4947]: I1129 06:40:25.862532 4947 generic.go:334] "Generic (PLEG): container finished" podID="8ff97c32-0757-44e2-8cad-55b8bfadf0a8" containerID="5fc41fdc4f8e60ff9b865b0ad50f446ce7ea47c4e71ad1c71a696910a8cb6862" exitCode=0 Nov 29 06:40:25 crc kubenswrapper[4947]: I1129 06:40:25.862914 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" event={"ID":"8ff97c32-0757-44e2-8cad-55b8bfadf0a8","Type":"ContainerDied","Data":"5fc41fdc4f8e60ff9b865b0ad50f446ce7ea47c4e71ad1c71a696910a8cb6862"} Nov 29 06:40:25 crc kubenswrapper[4947]: I1129 06:40:25.863488 4947 scope.go:117] "RemoveContainer" containerID="5fc41fdc4f8e60ff9b865b0ad50f446ce7ea47c4e71ad1c71a696910a8cb6862" Nov 29 06:40:25 crc kubenswrapper[4947]: I1129 06:40:25.866704 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.009790 4947 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.250877 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.420343 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.508546 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.570790 4947 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.570939 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gbhvj" podStartSLOduration=117.395171137 podStartE2EDuration="3m51.570906641s" podCreationTimestamp="2025-11-29 06:36:35 +0000 UTC" firstStartedPulling="2025-11-29 06:36:44.900081037 +0000 UTC m=+155.944463128" lastFinishedPulling="2025-11-29 06:38:39.075816551 +0000 UTC m=+270.120198632" observedRunningTime="2025-11-29 06:39:25.288213046 +0000 UTC m=+316.332595127" watchObservedRunningTime="2025-11-29 06:40:26.570906641 +0000 UTC m=+377.615288732" Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.571284 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8zln5" podStartSLOduration=82.197193585 podStartE2EDuration="3m49.57127742s" podCreationTimestamp="2025-11-29 06:36:37 +0000 UTC" firstStartedPulling="2025-11-29 06:36:44.89985186 +0000 UTC m=+155.944233941" lastFinishedPulling="2025-11-29 06:39:12.273935695 +0000 UTC m=+303.318317776" observedRunningTime="2025-11-29 06:39:25.351275926 +0000 UTC m=+316.395658007" 
watchObservedRunningTime="2025-11-29 06:40:26.57127742 +0000 UTC m=+377.615659501" Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.571940 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=116.571934817 podStartE2EDuration="1m56.571934817s" podCreationTimestamp="2025-11-29 06:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:39:25.266387842 +0000 UTC m=+316.310769923" watchObservedRunningTime="2025-11-29 06:40:26.571934817 +0000 UTC m=+377.616316898" Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.572141 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-phgdn" podStartSLOduration=116.439379869 podStartE2EDuration="3m52.572137362s" podCreationTimestamp="2025-11-29 06:36:34 +0000 UTC" firstStartedPulling="2025-11-29 06:36:42.268065393 +0000 UTC m=+153.312447474" lastFinishedPulling="2025-11-29 06:38:38.400822886 +0000 UTC m=+269.445204967" observedRunningTime="2025-11-29 06:39:25.161981433 +0000 UTC m=+316.206363554" watchObservedRunningTime="2025-11-29 06:40:26.572137362 +0000 UTC m=+377.616519443" Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.572207 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wqz4n" podStartSLOduration=116.831930298 podStartE2EDuration="3m51.572204654s" podCreationTimestamp="2025-11-29 06:36:35 +0000 UTC" firstStartedPulling="2025-11-29 06:36:44.900138508 +0000 UTC m=+155.944520589" lastFinishedPulling="2025-11-29 06:38:39.640412864 +0000 UTC m=+270.684794945" observedRunningTime="2025-11-29 06:39:25.239318772 +0000 UTC m=+316.283700863" watchObservedRunningTime="2025-11-29 06:40:26.572204654 +0000 UTC m=+377.616586735" Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 
06:40:26.574016 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fq4zz" podStartSLOduration=113.675310556 podStartE2EDuration="3m50.574010581s" podCreationTimestamp="2025-11-29 06:36:36 +0000 UTC" firstStartedPulling="2025-11-29 06:36:45.79656073 +0000 UTC m=+156.840942811" lastFinishedPulling="2025-11-29 06:38:42.695260735 +0000 UTC m=+273.739642836" observedRunningTime="2025-11-29 06:39:25.321817074 +0000 UTC m=+316.366199155" watchObservedRunningTime="2025-11-29 06:40:26.574010581 +0000 UTC m=+377.618392662" Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.574429 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qsz24" podStartSLOduration=82.500424526 podStartE2EDuration="3m49.574425751s" podCreationTimestamp="2025-11-29 06:36:37 +0000 UTC" firstStartedPulling="2025-11-29 06:36:45.796526169 +0000 UTC m=+156.840908290" lastFinishedPulling="2025-11-29 06:39:12.870527394 +0000 UTC m=+303.914909515" observedRunningTime="2025-11-29 06:39:25.252930854 +0000 UTC m=+316.297312945" watchObservedRunningTime="2025-11-29 06:40:26.574425751 +0000 UTC m=+377.618807832" Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.574494 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w24f6" podStartSLOduration=81.640130857 podStartE2EDuration="3m48.574492103s" podCreationTimestamp="2025-11-29 06:36:38 +0000 UTC" firstStartedPulling="2025-11-29 06:36:45.79657007 +0000 UTC m=+156.840952151" lastFinishedPulling="2025-11-29 06:39:12.730931316 +0000 UTC m=+303.775313397" observedRunningTime="2025-11-29 06:39:25.373609173 +0000 UTC m=+316.417991254" watchObservedRunningTime="2025-11-29 06:40:26.574492103 +0000 UTC m=+377.618874174" Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.575031 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.575063 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.580038 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.595421 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=60.59538964 podStartE2EDuration="1m0.59538964s" podCreationTimestamp="2025-11-29 06:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:40:26.590309129 +0000 UTC m=+377.634691210" watchObservedRunningTime="2025-11-29 06:40:26.59538964 +0000 UTC m=+377.639771761" Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.800145 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.869423 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-56656f9798-sghgg_e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a/machine-approver-controller/0.log" Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.869824 4947 generic.go:334] "Generic (PLEG): container finished" podID="e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a" containerID="86b2c705ea366b42df7dd6aca63a17370e3fb121328a6e8f4781a7f57bbf7ed5" exitCode=255 Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.869895 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sghgg" 
event={"ID":"e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a","Type":"ContainerDied","Data":"86b2c705ea366b42df7dd6aca63a17370e3fb121328a6e8f4781a7f57bbf7ed5"} Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.870549 4947 scope.go:117] "RemoveContainer" containerID="86b2c705ea366b42df7dd6aca63a17370e3fb121328a6e8f4781a7f57bbf7ed5" Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.871846 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" event={"ID":"8ff97c32-0757-44e2-8cad-55b8bfadf0a8","Type":"ContainerStarted","Data":"41d331cfd7ad358032e7559f40cde0b4cdea27ff90e04cd51e848ff078d49828"} Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.873124 4947 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" containerID="cri-o://5fc41fdc4f8e60ff9b865b0ad50f446ce7ea47c4e71ad1c71a696910a8cb6862" Nov 29 06:40:26 crc kubenswrapper[4947]: I1129 06:40:26.873146 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:40:27 crc kubenswrapper[4947]: I1129 06:40:27.036740 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 29 06:40:27 crc kubenswrapper[4947]: I1129 06:40:27.043945 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 29 06:40:27 crc kubenswrapper[4947]: I1129 06:40:27.044104 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:40:27 crc kubenswrapper[4947]: I1129 06:40:27.048776 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:40:27 crc 
kubenswrapper[4947]: I1129 06:40:27.171283 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 29 06:40:27 crc kubenswrapper[4947]: I1129 06:40:27.280482 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 29 06:40:27 crc kubenswrapper[4947]: I1129 06:40:27.331332 4947 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 29 06:40:27 crc kubenswrapper[4947]: I1129 06:40:27.352406 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 29 06:40:27 crc kubenswrapper[4947]: I1129 06:40:27.372045 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 29 06:40:27 crc kubenswrapper[4947]: I1129 06:40:27.431255 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 29 06:40:27 crc kubenswrapper[4947]: I1129 06:40:27.442733 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 29 06:40:27 crc kubenswrapper[4947]: I1129 06:40:27.888317 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-56656f9798-sghgg_e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a/machine-approver-controller/0.log" Nov 29 06:40:27 crc kubenswrapper[4947]: I1129 06:40:27.889116 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sghgg" event={"ID":"e6c322b6-b29e-4177-9e8c-7fefbf9d7e4a","Type":"ContainerStarted","Data":"fd99f1e3ec7b84fa3c33e4edfb9bb70dd842be5a81adb0254f107920e5179d89"} Nov 29 06:40:28 crc kubenswrapper[4947]: I1129 06:40:28.082844 4947 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 29 06:40:28 crc kubenswrapper[4947]: I1129 06:40:28.121071 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 29 06:40:28 crc kubenswrapper[4947]: I1129 06:40:28.442546 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 29 06:40:28 crc kubenswrapper[4947]: I1129 06:40:28.478489 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 29 06:40:28 crc kubenswrapper[4947]: I1129 06:40:28.775915 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 29 06:40:28 crc kubenswrapper[4947]: I1129 06:40:28.896692 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 29 06:40:29 crc kubenswrapper[4947]: I1129 06:40:29.122867 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 29 06:40:29 crc kubenswrapper[4947]: I1129 06:40:29.144700 4947 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 29 06:40:29 crc kubenswrapper[4947]: I1129 06:40:29.370707 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 29 06:40:29 crc kubenswrapper[4947]: I1129 06:40:29.389105 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 29 06:40:29 crc kubenswrapper[4947]: I1129 06:40:29.631447 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 29 06:40:29 crc 
kubenswrapper[4947]: I1129 06:40:29.889778 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 29 06:40:29 crc kubenswrapper[4947]: I1129 06:40:29.914150 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 29 06:40:30 crc kubenswrapper[4947]: I1129 06:40:30.043836 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 29 06:40:30 crc kubenswrapper[4947]: I1129 06:40:30.243966 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 29 06:40:30 crc kubenswrapper[4947]: I1129 06:40:30.334338 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 29 06:40:30 crc kubenswrapper[4947]: I1129 06:40:30.541694 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 29 06:40:30 crc kubenswrapper[4947]: I1129 06:40:30.651320 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 29 06:40:30 crc kubenswrapper[4947]: I1129 06:40:30.918317 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 29 06:40:30 crc kubenswrapper[4947]: I1129 06:40:30.936738 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 29 06:40:31 crc kubenswrapper[4947]: I1129 06:40:31.134552 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 29 06:40:31 crc kubenswrapper[4947]: I1129 06:40:31.396032 4947 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"kube-root-ca.crt" Nov 29 06:40:31 crc kubenswrapper[4947]: I1129 06:40:31.515303 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 29 06:40:31 crc kubenswrapper[4947]: I1129 06:40:31.555415 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 29 06:40:31 crc kubenswrapper[4947]: I1129 06:40:31.649577 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 29 06:40:31 crc kubenswrapper[4947]: I1129 06:40:31.654669 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 29 06:40:31 crc kubenswrapper[4947]: I1129 06:40:31.930979 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 29 06:40:32 crc kubenswrapper[4947]: I1129 06:40:32.028638 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 29 06:40:32 crc kubenswrapper[4947]: I1129 06:40:32.314874 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 29 06:40:32 crc kubenswrapper[4947]: I1129 06:40:32.465897 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 29 06:40:32 crc kubenswrapper[4947]: I1129 06:40:32.578617 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 29 06:40:32 crc kubenswrapper[4947]: I1129 06:40:32.612901 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 29 06:40:32 crc kubenswrapper[4947]: I1129 
06:40:32.647706 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 29 06:40:32 crc kubenswrapper[4947]: I1129 06:40:32.891062 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 29 06:40:33 crc kubenswrapper[4947]: I1129 06:40:33.119991 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 29 06:40:33 crc kubenswrapper[4947]: I1129 06:40:33.156357 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 29 06:40:33 crc kubenswrapper[4947]: I1129 06:40:33.204476 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 29 06:40:33 crc kubenswrapper[4947]: I1129 06:40:33.271965 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 29 06:40:33 crc kubenswrapper[4947]: I1129 06:40:33.312280 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 29 06:40:33 crc kubenswrapper[4947]: I1129 06:40:33.347029 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 29 06:40:33 crc kubenswrapper[4947]: I1129 06:40:33.381912 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 29 06:40:33 crc kubenswrapper[4947]: I1129 06:40:33.382941 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 29 06:40:33 crc kubenswrapper[4947]: I1129 06:40:33.610913 4947 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 29 06:40:33 crc kubenswrapper[4947]: I1129 06:40:33.803923 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 29 06:40:33 crc kubenswrapper[4947]: I1129 06:40:33.934504 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 29 06:40:34 crc kubenswrapper[4947]: I1129 06:40:34.078648 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 29 06:40:34 crc kubenswrapper[4947]: I1129 06:40:34.127746 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 29 06:40:34 crc kubenswrapper[4947]: I1129 06:40:34.232997 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 29 06:40:34 crc kubenswrapper[4947]: I1129 06:40:34.248358 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 29 06:40:34 crc kubenswrapper[4947]: I1129 06:40:34.252646 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 29 06:40:34 crc kubenswrapper[4947]: I1129 06:40:34.435194 4947 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 29 06:40:34 crc kubenswrapper[4947]: I1129 06:40:34.435420 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://6199b757c8dee13c0d44384a4a68198095a559a7877f519a7913d261149cbd16" gracePeriod=5 Nov 29 06:40:34 crc kubenswrapper[4947]: I1129 06:40:34.516953 4947 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 29 06:40:34 crc kubenswrapper[4947]: I1129 06:40:34.579944 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 29 06:40:34 crc kubenswrapper[4947]: I1129 06:40:34.661736 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 29 06:40:34 crc kubenswrapper[4947]: I1129 06:40:34.791499 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 29 06:40:35 crc kubenswrapper[4947]: I1129 06:40:35.322795 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 29 06:40:35 crc kubenswrapper[4947]: I1129 06:40:35.362938 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 29 06:40:35 crc kubenswrapper[4947]: I1129 06:40:35.506803 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 29 06:40:35 crc kubenswrapper[4947]: I1129 06:40:35.547797 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 29 06:40:35 crc kubenswrapper[4947]: I1129 06:40:35.610637 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 29 06:40:35 crc kubenswrapper[4947]: I1129 06:40:35.623727 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 29 06:40:35 crc kubenswrapper[4947]: I1129 06:40:35.761785 4947 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"audit" Nov 29 06:40:35 crc kubenswrapper[4947]: I1129 06:40:35.848125 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 29 06:40:35 crc kubenswrapper[4947]: I1129 06:40:35.870936 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 29 06:40:35 crc kubenswrapper[4947]: I1129 06:40:35.967709 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 29 06:40:35 crc kubenswrapper[4947]: I1129 06:40:35.980476 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 29 06:40:36 crc kubenswrapper[4947]: I1129 06:40:36.029874 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 29 06:40:36 crc kubenswrapper[4947]: I1129 06:40:36.100884 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 29 06:40:36 crc kubenswrapper[4947]: I1129 06:40:36.273819 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 29 06:40:36 crc kubenswrapper[4947]: I1129 06:40:36.298765 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 29 06:40:36 crc kubenswrapper[4947]: I1129 06:40:36.450558 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 29 06:40:36 crc kubenswrapper[4947]: I1129 06:40:36.605888 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 29 06:40:36 crc kubenswrapper[4947]: I1129 06:40:36.783102 4947 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-console"/"console-serving-cert" Nov 29 06:40:36 crc kubenswrapper[4947]: I1129 06:40:36.897454 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 29 06:40:36 crc kubenswrapper[4947]: I1129 06:40:36.947764 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.001080 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.011517 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.224722 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.397050 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gbhvj"] Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.397420 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gbhvj" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" containerName="registry-server" containerID="cri-o://d80950f3cbe94e5e3050599144d1cebbe62ec504c014ce28ba34c389c2765f48" gracePeriod=30 Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.414599 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.417271 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t4vb4"] Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.417960 4947 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-marketplace/certified-operators-t4vb4" podUID="159f707e-f150-45c6-9371-6b4b272eaf5d" containerName="registry-server" containerID="cri-o://27e50bde31a847edd63619b0d0cb538fccb516602f935f9cf663104e2776d315" gracePeriod=30 Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.430272 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-phgdn"] Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.437621 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-phgdn" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" containerName="registry-server" containerID="cri-o://d95e3edb448e83c6e101d519aab848bff447a7d2868f37d4581bde4155486473" gracePeriod=30 Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.447542 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wqz4n"] Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.449676 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wqz4n" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" containerName="registry-server" containerID="cri-o://2e178f5efb0c6e07219ed6d7ed32a5fe57693bf837256dc21fd3955805b19763" gracePeriod=30 Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.454687 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6mhws"] Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.454976 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" containerName="marketplace-operator" containerID="cri-o://e30cf77ba007e7f1f28eff7678bb4f5d42c6343b852d9a769d57318fc5668082" gracePeriod=30 Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.463345 4947 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zln5"] Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.463644 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8zln5" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" containerName="registry-server" containerID="cri-o://eb9e4b7b7f076600be5b39d1c3d2ea329804a209cdc7289606d4489cde5e4899" gracePeriod=30 Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.468529 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fq4zz"] Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.470538 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fq4zz" podUID="1e620beb-b5db-4321-b404-0ef499ded600" containerName="registry-server" containerID="cri-o://57fa2c131f3e218b323f0f7cb8f09fe3245232caad748f83b980bc0f141674fa" gracePeriod=30 Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.475775 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gfzmz"] Nov 29 06:40:37 crc kubenswrapper[4947]: E1129 06:40:37.476213 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" containerName="installer" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.476369 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" containerName="installer" Nov 29 06:40:37 crc kubenswrapper[4947]: E1129 06:40:37.476466 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.476602 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 29 06:40:37 crc kubenswrapper[4947]: 
I1129 06:40:37.476901 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.476993 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="05e8a117-69fe-4488-b4e4-c0d7f1b4a63a" containerName="installer" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.477859 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gfzmz" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.481480 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qsz24"] Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.481786 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qsz24" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" containerName="registry-server" containerID="cri-o://e5a081649ed745e050f904f0ae94eb22fa1c6557e9ec6c3b5ee6e8e36030ad83" gracePeriod=30 Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.489106 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w24f6"] Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.489446 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w24f6" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" containerName="registry-server" containerID="cri-o://27af4d6836630a884f7269dc64e86119906fed8e3338c4c3d4c2269060dcd173" gracePeriod=30 Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.643036 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f48b107b-5cce-4f64-b7d9-20d9efaa76c6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gfzmz\" (UID: 
\"f48b107b-5cce-4f64-b7d9-20d9efaa76c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-gfzmz" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.643424 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f48b107b-5cce-4f64-b7d9-20d9efaa76c6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gfzmz\" (UID: \"f48b107b-5cce-4f64-b7d9-20d9efaa76c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-gfzmz" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.643451 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55qxq\" (UniqueName: \"kubernetes.io/projected/f48b107b-5cce-4f64-b7d9-20d9efaa76c6-kube-api-access-55qxq\") pod \"marketplace-operator-79b997595-gfzmz\" (UID: \"f48b107b-5cce-4f64-b7d9-20d9efaa76c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-gfzmz" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.747895 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55qxq\" (UniqueName: \"kubernetes.io/projected/f48b107b-5cce-4f64-b7d9-20d9efaa76c6-kube-api-access-55qxq\") pod \"marketplace-operator-79b997595-gfzmz\" (UID: \"f48b107b-5cce-4f64-b7d9-20d9efaa76c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-gfzmz" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.747968 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f48b107b-5cce-4f64-b7d9-20d9efaa76c6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gfzmz\" (UID: \"f48b107b-5cce-4f64-b7d9-20d9efaa76c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-gfzmz" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.748032 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f48b107b-5cce-4f64-b7d9-20d9efaa76c6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gfzmz\" (UID: \"f48b107b-5cce-4f64-b7d9-20d9efaa76c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-gfzmz" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.749598 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f48b107b-5cce-4f64-b7d9-20d9efaa76c6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gfzmz\" (UID: \"f48b107b-5cce-4f64-b7d9-20d9efaa76c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-gfzmz" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.759018 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f48b107b-5cce-4f64-b7d9-20d9efaa76c6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gfzmz\" (UID: \"f48b107b-5cce-4f64-b7d9-20d9efaa76c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-gfzmz" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.766800 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55qxq\" (UniqueName: \"kubernetes.io/projected/f48b107b-5cce-4f64-b7d9-20d9efaa76c6-kube-api-access-55qxq\") pod \"marketplace-operator-79b997595-gfzmz\" (UID: \"f48b107b-5cce-4f64-b7d9-20d9efaa76c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-gfzmz" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.875862 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.943416 4947 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.949257 4947 generic.go:334] "Generic (PLEG): container finished" podID="253ae1cb-50f4-48e7-a004-a70a958c27cd" containerID="eb9e4b7b7f076600be5b39d1c3d2ea329804a209cdc7289606d4489cde5e4899" exitCode=0 Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.949320 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zln5" event={"ID":"253ae1cb-50f4-48e7-a004-a70a958c27cd","Type":"ContainerDied","Data":"eb9e4b7b7f076600be5b39d1c3d2ea329804a209cdc7289606d4489cde5e4899"} Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.958803 4947 generic.go:334] "Generic (PLEG): container finished" podID="1e620beb-b5db-4321-b404-0ef499ded600" containerID="57fa2c131f3e218b323f0f7cb8f09fe3245232caad748f83b980bc0f141674fa" exitCode=0 Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.958935 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fq4zz" event={"ID":"1e620beb-b5db-4321-b404-0ef499ded600","Type":"ContainerDied","Data":"57fa2c131f3e218b323f0f7cb8f09fe3245232caad748f83b980bc0f141674fa"} Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.961874 4947 generic.go:334] "Generic (PLEG): container finished" podID="159f707e-f150-45c6-9371-6b4b272eaf5d" containerID="27e50bde31a847edd63619b0d0cb538fccb516602f935f9cf663104e2776d315" exitCode=0 Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.961954 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4vb4" event={"ID":"159f707e-f150-45c6-9371-6b4b272eaf5d","Type":"ContainerDied","Data":"27e50bde31a847edd63619b0d0cb538fccb516602f935f9cf663104e2776d315"} Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.965484 4947 generic.go:334] "Generic (PLEG): container finished" podID="1a37f770-07c2-40b1-9f24-ccddc3215658" 
containerID="d80950f3cbe94e5e3050599144d1cebbe62ec504c014ce28ba34c389c2765f48" exitCode=0 Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.965564 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbhvj" event={"ID":"1a37f770-07c2-40b1-9f24-ccddc3215658","Type":"ContainerDied","Data":"d80950f3cbe94e5e3050599144d1cebbe62ec504c014ce28ba34c389c2765f48"} Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.965585 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbhvj" event={"ID":"1a37f770-07c2-40b1-9f24-ccddc3215658","Type":"ContainerDied","Data":"881876073f241f7e31165ab4e1d26169e20c99427fe39b13dff3e31fc512ab9f"} Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.965595 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="881876073f241f7e31165ab4e1d26169e20c99427fe39b13dff3e31fc512ab9f" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.967692 4947 generic.go:334] "Generic (PLEG): container finished" podID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" containerID="27af4d6836630a884f7269dc64e86119906fed8e3338c4c3d4c2269060dcd173" exitCode=0 Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.967765 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w24f6" event={"ID":"fd94a1d6-7039-4b84-aa44-ee8ec166da24","Type":"ContainerDied","Data":"27af4d6836630a884f7269dc64e86119906fed8e3338c4c3d4c2269060dcd173"} Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.970458 4947 generic.go:334] "Generic (PLEG): container finished" podID="658b72a9-13fc-4881-88c5-109b221bbc48" containerID="2e178f5efb0c6e07219ed6d7ed32a5fe57693bf837256dc21fd3955805b19763" exitCode=0 Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.970550 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqz4n" 
event={"ID":"658b72a9-13fc-4881-88c5-109b221bbc48","Type":"ContainerDied","Data":"2e178f5efb0c6e07219ed6d7ed32a5fe57693bf837256dc21fd3955805b19763"} Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.974352 4947 generic.go:334] "Generic (PLEG): container finished" podID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" containerID="e5a081649ed745e050f904f0ae94eb22fa1c6557e9ec6c3b5ee6e8e36030ad83" exitCode=0 Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.974425 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsz24" event={"ID":"93c9ef47-b7f5-41f4-8e91-bee4f16b658d","Type":"ContainerDied","Data":"e5a081649ed745e050f904f0ae94eb22fa1c6557e9ec6c3b5ee6e8e36030ad83"} Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.977303 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6mhws_675de8ae-169a-4737-a290-54cdb32d8cb0/marketplace-operator/2.log" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.977343 4947 generic.go:334] "Generic (PLEG): container finished" podID="675de8ae-169a-4737-a290-54cdb32d8cb0" containerID="e30cf77ba007e7f1f28eff7678bb4f5d42c6343b852d9a769d57318fc5668082" exitCode=0 Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.977476 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" event={"ID":"675de8ae-169a-4737-a290-54cdb32d8cb0","Type":"ContainerDied","Data":"e30cf77ba007e7f1f28eff7678bb4f5d42c6343b852d9a769d57318fc5668082"} Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.977576 4947 scope.go:117] "RemoveContainer" containerID="9cb1b99a6de15a7299b40ce7a384ef6f8eb2aa3e76ae8306f11b78be3a24f57f" Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.991332 4947 generic.go:334] "Generic (PLEG): container finished" podID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" containerID="d95e3edb448e83c6e101d519aab848bff447a7d2868f37d4581bde4155486473" exitCode=0 
Nov 29 06:40:37 crc kubenswrapper[4947]: I1129 06:40:37.991376 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phgdn" event={"ID":"b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8","Type":"ContainerDied","Data":"d95e3edb448e83c6e101d519aab848bff447a7d2868f37d4581bde4155486473"} Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.047951 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gfzmz" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.069071 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gbhvj" Nov 29 06:40:38 crc kubenswrapper[4947]: E1129 06:40:38.152357 4947 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e5a081649ed745e050f904f0ae94eb22fa1c6557e9ec6c3b5ee6e8e36030ad83 is running failed: container process not found" containerID="e5a081649ed745e050f904f0ae94eb22fa1c6557e9ec6c3b5ee6e8e36030ad83" cmd=["grpc_health_probe","-addr=:50051"] Nov 29 06:40:38 crc kubenswrapper[4947]: E1129 06:40:38.155529 4947 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e5a081649ed745e050f904f0ae94eb22fa1c6557e9ec6c3b5ee6e8e36030ad83 is running failed: container process not found" containerID="e5a081649ed745e050f904f0ae94eb22fa1c6557e9ec6c3b5ee6e8e36030ad83" cmd=["grpc_health_probe","-addr=:50051"] Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.156103 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-phgdn" Nov 29 06:40:38 crc kubenswrapper[4947]: E1129 06:40:38.158232 4947 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e5a081649ed745e050f904f0ae94eb22fa1c6557e9ec6c3b5ee6e8e36030ad83 is running failed: container process not found" containerID="e5a081649ed745e050f904f0ae94eb22fa1c6557e9ec6c3b5ee6e8e36030ad83" cmd=["grpc_health_probe","-addr=:50051"] Nov 29 06:40:38 crc kubenswrapper[4947]: E1129 06:40:38.158279 4947 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e5a081649ed745e050f904f0ae94eb22fa1c6557e9ec6c3b5ee6e8e36030ad83 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-qsz24" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" containerName="registry-server" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.158623 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a37f770-07c2-40b1-9f24-ccddc3215658-catalog-content\") pod \"1a37f770-07c2-40b1-9f24-ccddc3215658\" (UID: \"1a37f770-07c2-40b1-9f24-ccddc3215658\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.158729 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lztlt\" (UniqueName: \"kubernetes.io/projected/1a37f770-07c2-40b1-9f24-ccddc3215658-kube-api-access-lztlt\") pod \"1a37f770-07c2-40b1-9f24-ccddc3215658\" (UID: \"1a37f770-07c2-40b1-9f24-ccddc3215658\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.158762 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a37f770-07c2-40b1-9f24-ccddc3215658-utilities\") pod 
\"1a37f770-07c2-40b1-9f24-ccddc3215658\" (UID: \"1a37f770-07c2-40b1-9f24-ccddc3215658\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.159688 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a37f770-07c2-40b1-9f24-ccddc3215658-utilities" (OuterVolumeSpecName: "utilities") pod "1a37f770-07c2-40b1-9f24-ccddc3215658" (UID: "1a37f770-07c2-40b1-9f24-ccddc3215658"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.169282 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a37f770-07c2-40b1-9f24-ccddc3215658-kube-api-access-lztlt" (OuterVolumeSpecName: "kube-api-access-lztlt") pod "1a37f770-07c2-40b1-9f24-ccddc3215658" (UID: "1a37f770-07c2-40b1-9f24-ccddc3215658"). InnerVolumeSpecName "kube-api-access-lztlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.174024 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qsz24" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.232392 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wqz4n" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.239876 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fq4zz" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.265560 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8-catalog-content\") pod \"b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8\" (UID: \"b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.266010 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c9ef47-b7f5-41f4-8e91-bee4f16b658d-catalog-content\") pod \"93c9ef47-b7f5-41f4-8e91-bee4f16b658d\" (UID: \"93c9ef47-b7f5-41f4-8e91-bee4f16b658d\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.266266 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pfz9\" (UniqueName: \"kubernetes.io/projected/b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8-kube-api-access-5pfz9\") pod \"b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8\" (UID: \"b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.266395 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt875\" (UniqueName: \"kubernetes.io/projected/658b72a9-13fc-4881-88c5-109b221bbc48-kube-api-access-wt875\") pod \"658b72a9-13fc-4881-88c5-109b221bbc48\" (UID: \"658b72a9-13fc-4881-88c5-109b221bbc48\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.266587 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpvng\" (UniqueName: \"kubernetes.io/projected/93c9ef47-b7f5-41f4-8e91-bee4f16b658d-kube-api-access-tpvng\") pod \"93c9ef47-b7f5-41f4-8e91-bee4f16b658d\" (UID: \"93c9ef47-b7f5-41f4-8e91-bee4f16b658d\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.266797 4947 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e620beb-b5db-4321-b404-0ef499ded600-utilities\") pod \"1e620beb-b5db-4321-b404-0ef499ded600\" (UID: \"1e620beb-b5db-4321-b404-0ef499ded600\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.266927 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xpnc\" (UniqueName: \"kubernetes.io/projected/1e620beb-b5db-4321-b404-0ef499ded600-kube-api-access-4xpnc\") pod \"1e620beb-b5db-4321-b404-0ef499ded600\" (UID: \"1e620beb-b5db-4321-b404-0ef499ded600\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.267164 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c9ef47-b7f5-41f4-8e91-bee4f16b658d-utilities\") pod \"93c9ef47-b7f5-41f4-8e91-bee4f16b658d\" (UID: \"93c9ef47-b7f5-41f4-8e91-bee4f16b658d\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.268528 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e620beb-b5db-4321-b404-0ef499ded600-utilities" (OuterVolumeSpecName: "utilities") pod "1e620beb-b5db-4321-b404-0ef499ded600" (UID: "1e620beb-b5db-4321-b404-0ef499ded600"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.269012 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a37f770-07c2-40b1-9f24-ccddc3215658-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a37f770-07c2-40b1-9f24-ccddc3215658" (UID: "1a37f770-07c2-40b1-9f24-ccddc3215658"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.270827 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c9ef47-b7f5-41f4-8e91-bee4f16b658d-kube-api-access-tpvng" (OuterVolumeSpecName: "kube-api-access-tpvng") pod "93c9ef47-b7f5-41f4-8e91-bee4f16b658d" (UID: "93c9ef47-b7f5-41f4-8e91-bee4f16b658d"). InnerVolumeSpecName "kube-api-access-tpvng". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.270868 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/658b72a9-13fc-4881-88c5-109b221bbc48-kube-api-access-wt875" (OuterVolumeSpecName: "kube-api-access-wt875") pod "658b72a9-13fc-4881-88c5-109b221bbc48" (UID: "658b72a9-13fc-4881-88c5-109b221bbc48"). InnerVolumeSpecName "kube-api-access-wt875". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.271439 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93c9ef47-b7f5-41f4-8e91-bee4f16b658d-utilities" (OuterVolumeSpecName: "utilities") pod "93c9ef47-b7f5-41f4-8e91-bee4f16b658d" (UID: "93c9ef47-b7f5-41f4-8e91-bee4f16b658d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.272050 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e620beb-b5db-4321-b404-0ef499ded600-catalog-content\") pod \"1e620beb-b5db-4321-b404-0ef499ded600\" (UID: \"1e620beb-b5db-4321-b404-0ef499ded600\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.273020 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8-utilities\") pod \"b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8\" (UID: \"b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.273713 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/658b72a9-13fc-4881-88c5-109b221bbc48-utilities\") pod \"658b72a9-13fc-4881-88c5-109b221bbc48\" (UID: \"658b72a9-13fc-4881-88c5-109b221bbc48\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.273952 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/658b72a9-13fc-4881-88c5-109b221bbc48-catalog-content\") pod \"658b72a9-13fc-4881-88c5-109b221bbc48\" (UID: \"658b72a9-13fc-4881-88c5-109b221bbc48\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.273772 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e620beb-b5db-4321-b404-0ef499ded600-kube-api-access-4xpnc" (OuterVolumeSpecName: "kube-api-access-4xpnc") pod "1e620beb-b5db-4321-b404-0ef499ded600" (UID: "1e620beb-b5db-4321-b404-0ef499ded600"). InnerVolumeSpecName "kube-api-access-4xpnc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.274245 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8-kube-api-access-5pfz9" (OuterVolumeSpecName: "kube-api-access-5pfz9") pod "b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" (UID: "b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8"). InnerVolumeSpecName "kube-api-access-5pfz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.274477 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8-utilities" (OuterVolumeSpecName: "utilities") pod "b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" (UID: "b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.274538 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/658b72a9-13fc-4881-88c5-109b221bbc48-utilities" (OuterVolumeSpecName: "utilities") pod "658b72a9-13fc-4881-88c5-109b221bbc48" (UID: "658b72a9-13fc-4881-88c5-109b221bbc48"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.276916 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.277019 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/658b72a9-13fc-4881-88c5-109b221bbc48-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.277094 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a37f770-07c2-40b1-9f24-ccddc3215658-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.277610 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pfz9\" (UniqueName: \"kubernetes.io/projected/b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8-kube-api-access-5pfz9\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.277700 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt875\" (UniqueName: \"kubernetes.io/projected/658b72a9-13fc-4881-88c5-109b221bbc48-kube-api-access-wt875\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.277785 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpvng\" (UniqueName: \"kubernetes.io/projected/93c9ef47-b7f5-41f4-8e91-bee4f16b658d-kube-api-access-tpvng\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.277856 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lztlt\" (UniqueName: \"kubernetes.io/projected/1a37f770-07c2-40b1-9f24-ccddc3215658-kube-api-access-lztlt\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc 
kubenswrapper[4947]: I1129 06:40:38.277932 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e620beb-b5db-4321-b404-0ef499ded600-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.278010 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a37f770-07c2-40b1-9f24-ccddc3215658-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.278080 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xpnc\" (UniqueName: \"kubernetes.io/projected/1e620beb-b5db-4321-b404-0ef499ded600-kube-api-access-4xpnc\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.278148 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c9ef47-b7f5-41f4-8e91-bee4f16b658d-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.277581 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.277425 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8zln5" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.281082 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w24f6" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.289399 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t4vb4" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.294997 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e620beb-b5db-4321-b404-0ef499ded600-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e620beb-b5db-4321-b404-0ef499ded600" (UID: "1e620beb-b5db-4321-b404-0ef499ded600"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.341803 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/658b72a9-13fc-4881-88c5-109b221bbc48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "658b72a9-13fc-4881-88c5-109b221bbc48" (UID: "658b72a9-13fc-4881-88c5-109b221bbc48"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.376087 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" (UID: "b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.380560 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/159f707e-f150-45c6-9371-6b4b272eaf5d-utilities\") pod \"159f707e-f150-45c6-9371-6b4b272eaf5d\" (UID: \"159f707e-f150-45c6-9371-6b4b272eaf5d\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.380898 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2cf6\" (UniqueName: \"kubernetes.io/projected/253ae1cb-50f4-48e7-a004-a70a958c27cd-kube-api-access-m2cf6\") pod \"253ae1cb-50f4-48e7-a004-a70a958c27cd\" (UID: \"253ae1cb-50f4-48e7-a004-a70a958c27cd\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.380938 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/675de8ae-169a-4737-a290-54cdb32d8cb0-marketplace-operator-metrics\") pod \"675de8ae-169a-4737-a290-54cdb32d8cb0\" (UID: \"675de8ae-169a-4737-a290-54cdb32d8cb0\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.380956 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/159f707e-f150-45c6-9371-6b4b272eaf5d-catalog-content\") pod \"159f707e-f150-45c6-9371-6b4b272eaf5d\" (UID: \"159f707e-f150-45c6-9371-6b4b272eaf5d\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.380982 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd94a1d6-7039-4b84-aa44-ee8ec166da24-catalog-content\") pod \"fd94a1d6-7039-4b84-aa44-ee8ec166da24\" (UID: \"fd94a1d6-7039-4b84-aa44-ee8ec166da24\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.381036 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-wmdhm\" (UniqueName: \"kubernetes.io/projected/675de8ae-169a-4737-a290-54cdb32d8cb0-kube-api-access-wmdhm\") pod \"675de8ae-169a-4737-a290-54cdb32d8cb0\" (UID: \"675de8ae-169a-4737-a290-54cdb32d8cb0\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.381060 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd5ln\" (UniqueName: \"kubernetes.io/projected/159f707e-f150-45c6-9371-6b4b272eaf5d-kube-api-access-fd5ln\") pod \"159f707e-f150-45c6-9371-6b4b272eaf5d\" (UID: \"159f707e-f150-45c6-9371-6b4b272eaf5d\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.381095 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/253ae1cb-50f4-48e7-a004-a70a958c27cd-catalog-content\") pod \"253ae1cb-50f4-48e7-a004-a70a958c27cd\" (UID: \"253ae1cb-50f4-48e7-a004-a70a958c27cd\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.381123 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/253ae1cb-50f4-48e7-a004-a70a958c27cd-utilities\") pod \"253ae1cb-50f4-48e7-a004-a70a958c27cd\" (UID: \"253ae1cb-50f4-48e7-a004-a70a958c27cd\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.381144 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd94a1d6-7039-4b84-aa44-ee8ec166da24-utilities\") pod \"fd94a1d6-7039-4b84-aa44-ee8ec166da24\" (UID: \"fd94a1d6-7039-4b84-aa44-ee8ec166da24\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.381174 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/675de8ae-169a-4737-a290-54cdb32d8cb0-marketplace-trusted-ca\") pod \"675de8ae-169a-4737-a290-54cdb32d8cb0\" (UID: 
\"675de8ae-169a-4737-a290-54cdb32d8cb0\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.381192 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb9l4\" (UniqueName: \"kubernetes.io/projected/fd94a1d6-7039-4b84-aa44-ee8ec166da24-kube-api-access-sb9l4\") pod \"fd94a1d6-7039-4b84-aa44-ee8ec166da24\" (UID: \"fd94a1d6-7039-4b84-aa44-ee8ec166da24\") " Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.381269 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/159f707e-f150-45c6-9371-6b4b272eaf5d-utilities" (OuterVolumeSpecName: "utilities") pod "159f707e-f150-45c6-9371-6b4b272eaf5d" (UID: "159f707e-f150-45c6-9371-6b4b272eaf5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.381704 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e620beb-b5db-4321-b404-0ef499ded600-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.381732 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/658b72a9-13fc-4881-88c5-109b221bbc48-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.381746 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/159f707e-f150-45c6-9371-6b4b272eaf5d-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.381761 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.382892 4947 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd94a1d6-7039-4b84-aa44-ee8ec166da24-utilities" (OuterVolumeSpecName: "utilities") pod "fd94a1d6-7039-4b84-aa44-ee8ec166da24" (UID: "fd94a1d6-7039-4b84-aa44-ee8ec166da24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.383732 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/253ae1cb-50f4-48e7-a004-a70a958c27cd-utilities" (OuterVolumeSpecName: "utilities") pod "253ae1cb-50f4-48e7-a004-a70a958c27cd" (UID: "253ae1cb-50f4-48e7-a004-a70a958c27cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.386586 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd94a1d6-7039-4b84-aa44-ee8ec166da24-kube-api-access-sb9l4" (OuterVolumeSpecName: "kube-api-access-sb9l4") pod "fd94a1d6-7039-4b84-aa44-ee8ec166da24" (UID: "fd94a1d6-7039-4b84-aa44-ee8ec166da24"). InnerVolumeSpecName "kube-api-access-sb9l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.386766 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/159f707e-f150-45c6-9371-6b4b272eaf5d-kube-api-access-fd5ln" (OuterVolumeSpecName: "kube-api-access-fd5ln") pod "159f707e-f150-45c6-9371-6b4b272eaf5d" (UID: "159f707e-f150-45c6-9371-6b4b272eaf5d"). InnerVolumeSpecName "kube-api-access-fd5ln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.394433 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/253ae1cb-50f4-48e7-a004-a70a958c27cd-kube-api-access-m2cf6" (OuterVolumeSpecName: "kube-api-access-m2cf6") pod "253ae1cb-50f4-48e7-a004-a70a958c27cd" (UID: "253ae1cb-50f4-48e7-a004-a70a958c27cd"). InnerVolumeSpecName "kube-api-access-m2cf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.397396 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/675de8ae-169a-4737-a290-54cdb32d8cb0-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "675de8ae-169a-4737-a290-54cdb32d8cb0" (UID: "675de8ae-169a-4737-a290-54cdb32d8cb0"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.400079 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/675de8ae-169a-4737-a290-54cdb32d8cb0-kube-api-access-wmdhm" (OuterVolumeSpecName: "kube-api-access-wmdhm") pod "675de8ae-169a-4737-a290-54cdb32d8cb0" (UID: "675de8ae-169a-4737-a290-54cdb32d8cb0"). InnerVolumeSpecName "kube-api-access-wmdhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.403195 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/675de8ae-169a-4737-a290-54cdb32d8cb0-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "675de8ae-169a-4737-a290-54cdb32d8cb0" (UID: "675de8ae-169a-4737-a290-54cdb32d8cb0"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.421590 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/253ae1cb-50f4-48e7-a004-a70a958c27cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "253ae1cb-50f4-48e7-a004-a70a958c27cd" (UID: "253ae1cb-50f4-48e7-a004-a70a958c27cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.438817 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93c9ef47-b7f5-41f4-8e91-bee4f16b658d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93c9ef47-b7f5-41f4-8e91-bee4f16b658d" (UID: "93c9ef47-b7f5-41f4-8e91-bee4f16b658d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.448779 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/159f707e-f150-45c6-9371-6b4b272eaf5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "159f707e-f150-45c6-9371-6b4b272eaf5d" (UID: "159f707e-f150-45c6-9371-6b4b272eaf5d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.483556 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c9ef47-b7f5-41f4-8e91-bee4f16b658d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.483600 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmdhm\" (UniqueName: \"kubernetes.io/projected/675de8ae-169a-4737-a290-54cdb32d8cb0-kube-api-access-wmdhm\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.483618 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd5ln\" (UniqueName: \"kubernetes.io/projected/159f707e-f150-45c6-9371-6b4b272eaf5d-kube-api-access-fd5ln\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.483629 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/253ae1cb-50f4-48e7-a004-a70a958c27cd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.483641 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/253ae1cb-50f4-48e7-a004-a70a958c27cd-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.483651 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd94a1d6-7039-4b84-aa44-ee8ec166da24-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.483661 4947 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/675de8ae-169a-4737-a290-54cdb32d8cb0-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc 
kubenswrapper[4947]: I1129 06:40:38.483675 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb9l4\" (UniqueName: \"kubernetes.io/projected/fd94a1d6-7039-4b84-aa44-ee8ec166da24-kube-api-access-sb9l4\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.483685 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2cf6\" (UniqueName: \"kubernetes.io/projected/253ae1cb-50f4-48e7-a004-a70a958c27cd-kube-api-access-m2cf6\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.483696 4947 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/675de8ae-169a-4737-a290-54cdb32d8cb0-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.483708 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/159f707e-f150-45c6-9371-6b4b272eaf5d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.495769 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd94a1d6-7039-4b84-aa44-ee8ec166da24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd94a1d6-7039-4b84-aa44-ee8ec166da24" (UID: "fd94a1d6-7039-4b84-aa44-ee8ec166da24"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.563928 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.585336 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd94a1d6-7039-4b84-aa44-ee8ec166da24-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.626871 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 29 06:40:38 crc kubenswrapper[4947]: I1129 06:40:38.743143 4947 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.000372 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" event={"ID":"675de8ae-169a-4737-a290-54cdb32d8cb0","Type":"ContainerDied","Data":"bbd1e7e850d1b521e66d40250ebb40ea0fa999a2aa6c6ae2912fcb9ab3031f7f"} Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.000718 4947 scope.go:117] "RemoveContainer" containerID="e30cf77ba007e7f1f28eff7678bb4f5d42c6343b852d9a769d57318fc5668082" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.000428 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.003682 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phgdn" event={"ID":"b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8","Type":"ContainerDied","Data":"add974d42f317674dda406080d8dcc2d231cb389ce6b47c1a8d2456f6b1831b7"} Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.003708 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-phgdn" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.006427 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w24f6" event={"ID":"fd94a1d6-7039-4b84-aa44-ee8ec166da24","Type":"ContainerDied","Data":"6d07517addb7352585fbc41b52e6ae93d1a48b53856041ebd6b1a86f0f2c9e5a"} Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.006553 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w24f6" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.007549 4947 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6mhws container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.007642 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6mhws" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.011435 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4vb4" event={"ID":"159f707e-f150-45c6-9371-6b4b272eaf5d","Type":"ContainerDied","Data":"aa77bd2983d90833ecdc4ddbc40076fdcfce1edf477bd5e816563a24ecdb6105"} Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.011539 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t4vb4" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.013879 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zln5" event={"ID":"253ae1cb-50f4-48e7-a004-a70a958c27cd","Type":"ContainerDied","Data":"6d5bfc375b7fb87fab2d1d765997e5912baae8a4340fef9ac54176aae988450a"} Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.013892 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8zln5" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.018288 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fq4zz" event={"ID":"1e620beb-b5db-4321-b404-0ef499ded600","Type":"ContainerDied","Data":"f4b5cf2791740beecb54d837d3351f1f1be1163d1d7c1602b4267b4ecaa9d6c1"} Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.018369 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fq4zz" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.021087 4947 scope.go:117] "RemoveContainer" containerID="d95e3edb448e83c6e101d519aab848bff447a7d2868f37d4581bde4155486473" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.022386 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wqz4n" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.022394 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqz4n" event={"ID":"658b72a9-13fc-4881-88c5-109b221bbc48","Type":"ContainerDied","Data":"6df30375539c85f723c5b1e1bc66fed564abaefeea10a108dd05a182cd3c5bb4"} Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.036240 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gbhvj" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.039047 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qsz24" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.039071 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsz24" event={"ID":"93c9ef47-b7f5-41f4-8e91-bee4f16b658d","Type":"ContainerDied","Data":"00d4ec0c49505de171c295ed3db5700c4603af0de15aa4acfca9a970fea10ac2"} Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.055289 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-phgdn"] Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.058308 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.068393 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-phgdn"] Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.071237 4947 scope.go:117] "RemoveContainer" containerID="bae87c40c5d616ee71eac6558dc4f929c824e01eb3eb68699062b24f52ebde62" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.088426 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fq4zz"] Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.098384 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fq4zz"] Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.103360 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wqz4n"] Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.107258 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.110919 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-wqz4n"] Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.115784 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w24f6"] Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.118612 4947 scope.go:117] "RemoveContainer" containerID="916c62a2f75dca89ddefabefc3aa70951292f893d1f25d243e3ea1333a2aaa3a" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.125192 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w24f6"] Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.131527 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zln5"] Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.136422 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zln5"] Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.142649 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6mhws"] Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.146981 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6mhws"] Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.151578 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qsz24"] Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.154534 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qsz24"] Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.158442 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t4vb4"] Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.161387 4947 scope.go:117] "RemoveContainer" containerID="27af4d6836630a884f7269dc64e86119906fed8e3338c4c3d4c2269060dcd173" Nov 29 06:40:39 crc 
kubenswrapper[4947]: I1129 06:40:39.161974 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t4vb4"] Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.165701 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gbhvj"] Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.169312 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gbhvj"] Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.187632 4947 scope.go:117] "RemoveContainer" containerID="6d2570c91b1cae30cc1b3a92b39850cf05c6aefcc546590bd2f8036d76002d51" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.191003 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="159f707e-f150-45c6-9371-6b4b272eaf5d" path="/var/lib/kubelet/pods/159f707e-f150-45c6-9371-6b4b272eaf5d/volumes" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.191616 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" path="/var/lib/kubelet/pods/1a37f770-07c2-40b1-9f24-ccddc3215658/volumes" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.192204 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e620beb-b5db-4321-b404-0ef499ded600" path="/var/lib/kubelet/pods/1e620beb-b5db-4321-b404-0ef499ded600/volumes" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.193184 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" path="/var/lib/kubelet/pods/253ae1cb-50f4-48e7-a004-a70a958c27cd/volumes" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.193823 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" path="/var/lib/kubelet/pods/658b72a9-13fc-4881-88c5-109b221bbc48/volumes" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.194929 4947 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" path="/var/lib/kubelet/pods/675de8ae-169a-4737-a290-54cdb32d8cb0/volumes" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.195480 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" path="/var/lib/kubelet/pods/93c9ef47-b7f5-41f4-8e91-bee4f16b658d/volumes" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.196966 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" path="/var/lib/kubelet/pods/b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8/volumes" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.197829 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" path="/var/lib/kubelet/pods/fd94a1d6-7039-4b84-aa44-ee8ec166da24/volumes" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.212919 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.218153 4947 scope.go:117] "RemoveContainer" containerID="33967f8a2bfa6fd85b97d82e57ac6127b4f2d826788d375fc2ebb91d9772287f" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.242156 4947 scope.go:117] "RemoveContainer" containerID="27e50bde31a847edd63619b0d0cb538fccb516602f935f9cf663104e2776d315" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.263328 4947 scope.go:117] "RemoveContainer" containerID="283afd4d499611cd02ae6d506bf253cfd055b10a46905adc340ec986a71861e2" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.275062 4947 scope.go:117] "RemoveContainer" containerID="36ff54017bf5d6cb1a3fa22d28583e9c2b073eee805669cf1bfb9f564682371e" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.285462 4947 scope.go:117] "RemoveContainer" 
containerID="eb9e4b7b7f076600be5b39d1c3d2ea329804a209cdc7289606d4489cde5e4899" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.296750 4947 scope.go:117] "RemoveContainer" containerID="c2fae668c2ec078a95e6c7b2608cc32d7b1257a983016f67137bb7dca946cbc9" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.320286 4947 scope.go:117] "RemoveContainer" containerID="c68eed3d9db1869817626a6f965483b1ed0d01a01d450a271ab6c273ae51aeb3" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.335123 4947 scope.go:117] "RemoveContainer" containerID="57fa2c131f3e218b323f0f7cb8f09fe3245232caad748f83b980bc0f141674fa" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.345986 4947 scope.go:117] "RemoveContainer" containerID="a64f74c2e3ba9f27f62725df7e0a64846a2da7f01b7d1886cf22e755e666235a" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.358654 4947 scope.go:117] "RemoveContainer" containerID="86f2002f25356ee3382834b6865c179bef0e838a309ca7342f577cefffae10bd" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.369238 4947 scope.go:117] "RemoveContainer" containerID="2e178f5efb0c6e07219ed6d7ed32a5fe57693bf837256dc21fd3955805b19763" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.381235 4947 scope.go:117] "RemoveContainer" containerID="976cf47c7a81ded79cab697b184562dfadbf98223f35c31af39b4d34941240e1" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.404655 4947 scope.go:117] "RemoveContainer" containerID="8305120d2c16842620cab8de3a5af9aabfd34c903721287ec8b672723e321425" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.419078 4947 scope.go:117] "RemoveContainer" containerID="e5a081649ed745e050f904f0ae94eb22fa1c6557e9ec6c3b5ee6e8e36030ad83" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.439261 4947 scope.go:117] "RemoveContainer" containerID="2924e68fe6b0334e6b27b698c25d36a60d6ece07f2b9ca5328e18bfe18cf4d00" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.464516 4947 scope.go:117] "RemoveContainer" 
containerID="d62a5d2f3d75367628e637726ba319a36808d77db75ae6be396a2295a5b501e7" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.532539 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.683865 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 29 06:40:39 crc kubenswrapper[4947]: I1129 06:40:39.981738 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.016740 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.016810 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.041348 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.041421 4947 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="6199b757c8dee13c0d44384a4a68198095a559a7877f519a7913d261149cbd16" exitCode=137 Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.041503 4947 scope.go:117] "RemoveContainer" containerID="6199b757c8dee13c0d44384a4a68198095a559a7877f519a7913d261149cbd16" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.041556 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.061552 4947 scope.go:117] "RemoveContainer" containerID="6199b757c8dee13c0d44384a4a68198095a559a7877f519a7913d261149cbd16" Nov 29 06:40:40 crc kubenswrapper[4947]: E1129 06:40:40.062026 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6199b757c8dee13c0d44384a4a68198095a559a7877f519a7913d261149cbd16\": container with ID starting with 6199b757c8dee13c0d44384a4a68198095a559a7877f519a7913d261149cbd16 not found: ID does not exist" containerID="6199b757c8dee13c0d44384a4a68198095a559a7877f519a7913d261149cbd16" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.062067 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6199b757c8dee13c0d44384a4a68198095a559a7877f519a7913d261149cbd16"} err="failed to get container status \"6199b757c8dee13c0d44384a4a68198095a559a7877f519a7913d261149cbd16\": rpc error: code = NotFound desc = could not find container \"6199b757c8dee13c0d44384a4a68198095a559a7877f519a7913d261149cbd16\": container with ID starting with 6199b757c8dee13c0d44384a4a68198095a559a7877f519a7913d261149cbd16 not found: ID does not exist" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.106630 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.106678 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 
06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.106725 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.106784 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.106801 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.106821 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.106837 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.106911 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.107013 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.107058 4947 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.107076 4947 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.107086 4947 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.112754 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). 
InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.150554 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.177213 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.201634 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.207820 4947 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.207849 4947 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.292320 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.437521 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.529158 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 06:40:40.730845 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 29 06:40:40 crc kubenswrapper[4947]: I1129 
06:40:40.789380 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 29 06:40:41 crc kubenswrapper[4947]: I1129 06:40:41.149685 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 29 06:40:41 crc kubenswrapper[4947]: I1129 06:40:41.160411 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 29 06:40:41 crc kubenswrapper[4947]: I1129 06:40:41.186303 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 29 06:40:41 crc kubenswrapper[4947]: I1129 06:40:41.186591 4947 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Nov 29 06:40:41 crc kubenswrapper[4947]: I1129 06:40:41.195679 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 29 06:40:41 crc kubenswrapper[4947]: I1129 06:40:41.195716 4947 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c90acd5c-66a4-41ca-b732-8922f1589d18" Nov 29 06:40:41 crc kubenswrapper[4947]: I1129 06:40:41.198536 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 29 06:40:41 crc kubenswrapper[4947]: I1129 06:40:41.198572 4947 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c90acd5c-66a4-41ca-b732-8922f1589d18" Nov 29 06:40:41 crc kubenswrapper[4947]: I1129 06:40:41.721893 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 
29 06:40:41 crc kubenswrapper[4947]: I1129 06:40:41.880150 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 29 06:40:41 crc kubenswrapper[4947]: I1129 06:40:41.907428 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 29 06:40:41 crc kubenswrapper[4947]: I1129 06:40:41.969729 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 29 06:40:42 crc kubenswrapper[4947]: I1129 06:40:42.063488 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 29 06:40:42 crc kubenswrapper[4947]: I1129 06:40:42.567803 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 29 06:40:42 crc kubenswrapper[4947]: I1129 06:40:42.826193 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 29 06:40:42 crc kubenswrapper[4947]: I1129 06:40:42.827151 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 29 06:40:42 crc kubenswrapper[4947]: I1129 06:40:42.905210 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 29 06:40:43 crc kubenswrapper[4947]: I1129 06:40:43.106825 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 29 06:40:43 crc kubenswrapper[4947]: I1129 06:40:43.814908 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 29 06:40:43 crc kubenswrapper[4947]: I1129 06:40:43.931863 4947 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 29 06:40:44 crc kubenswrapper[4947]: I1129 06:40:44.051385 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 29 06:40:44 crc kubenswrapper[4947]: I1129 06:40:44.321873 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 29 06:40:44 crc kubenswrapper[4947]: I1129 06:40:44.439386 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 29 06:40:44 crc kubenswrapper[4947]: I1129 06:40:44.654854 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 29 06:40:44 crc kubenswrapper[4947]: I1129 06:40:44.740624 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 29 06:40:45 crc kubenswrapper[4947]: I1129 06:40:45.381186 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 29 06:40:45 crc kubenswrapper[4947]: I1129 06:40:45.426024 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 29 06:40:45 crc kubenswrapper[4947]: I1129 06:40:45.467014 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 29 06:40:45 crc kubenswrapper[4947]: I1129 06:40:45.495920 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 29 06:40:45 crc kubenswrapper[4947]: I1129 06:40:45.506464 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 29 06:40:45 crc 
kubenswrapper[4947]: I1129 06:40:45.557695 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 29 06:40:45 crc kubenswrapper[4947]: I1129 06:40:45.595418 4947 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 29 06:40:45 crc kubenswrapper[4947]: I1129 06:40:45.796480 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 29 06:40:45 crc kubenswrapper[4947]: I1129 06:40:45.834298 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 29 06:40:46 crc kubenswrapper[4947]: I1129 06:40:46.215528 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 29 06:40:46 crc kubenswrapper[4947]: I1129 06:40:46.281816 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 29 06:40:46 crc kubenswrapper[4947]: I1129 06:40:46.584830 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 29 06:40:46 crc kubenswrapper[4947]: I1129 06:40:46.728751 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 29 06:40:47 crc kubenswrapper[4947]: I1129 06:40:47.018158 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 29 06:40:47 crc kubenswrapper[4947]: I1129 06:40:47.565025 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 29 06:40:47 crc kubenswrapper[4947]: I1129 06:40:47.615014 4947 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 29 06:40:47 crc kubenswrapper[4947]: I1129 06:40:47.743584 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 29 06:40:47 crc kubenswrapper[4947]: I1129 06:40:47.831911 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 29 06:40:48 crc kubenswrapper[4947]: I1129 06:40:48.005888 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 29 06:40:48 crc kubenswrapper[4947]: I1129 06:40:48.192617 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 29 06:40:48 crc kubenswrapper[4947]: I1129 06:40:48.267797 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 29 06:40:48 crc kubenswrapper[4947]: I1129 06:40:48.392447 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 29 06:40:48 crc kubenswrapper[4947]: I1129 06:40:48.732046 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 29 06:40:49 crc kubenswrapper[4947]: I1129 06:40:49.019910 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 29 06:40:49 crc kubenswrapper[4947]: I1129 06:40:49.183363 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 29 06:40:49 crc kubenswrapper[4947]: I1129 06:40:49.828661 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 29 06:40:50 crc kubenswrapper[4947]: I1129 06:40:50.383382 4947 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 29 06:40:50 crc kubenswrapper[4947]: I1129 06:40:50.819318 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 29 06:40:50 crc kubenswrapper[4947]: I1129 06:40:50.854820 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 29 06:40:50 crc kubenswrapper[4947]: I1129 06:40:50.885463 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 29 06:40:50 crc kubenswrapper[4947]: I1129 06:40:50.925352 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 29 06:40:51 crc kubenswrapper[4947]: I1129 06:40:51.571319 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 29 06:40:52 crc kubenswrapper[4947]: I1129 06:40:52.156583 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 29 06:40:52 crc kubenswrapper[4947]: I1129 06:40:52.767115 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 29 06:40:52 crc kubenswrapper[4947]: I1129 06:40:52.987775 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 06:40:52 crc kubenswrapper[4947]: I1129 06:40:52.987871 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" 
podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 06:40:53 crc kubenswrapper[4947]: I1129 06:40:53.090597 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 29 06:40:53 crc kubenswrapper[4947]: I1129 06:40:53.091352 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gfzmz"] Nov 29 06:40:53 crc kubenswrapper[4947]: I1129 06:40:53.182290 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 29 06:40:53 crc kubenswrapper[4947]: I1129 06:40:53.465695 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gfzmz"] Nov 29 06:40:53 crc kubenswrapper[4947]: I1129 06:40:53.673470 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 29 06:40:54 crc kubenswrapper[4947]: I1129 06:40:54.123584 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gfzmz" event={"ID":"f48b107b-5cce-4f64-b7d9-20d9efaa76c6","Type":"ContainerStarted","Data":"b31f742cdecc48cc098bfdf8f6a511e09fa860a9cd401a56a2355afd708019a1"} Nov 29 06:40:54 crc kubenswrapper[4947]: I1129 06:40:54.124013 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gfzmz" event={"ID":"f48b107b-5cce-4f64-b7d9-20d9efaa76c6","Type":"ContainerStarted","Data":"198efd3669bbdf7cc64631650815d6b8723bfef632d555366ba92f0c587ee126"} Nov 29 06:40:54 crc kubenswrapper[4947]: I1129 06:40:54.124036 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/marketplace-operator-79b997595-gfzmz" Nov 29 06:40:54 crc kubenswrapper[4947]: I1129 06:40:54.126458 4947 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gfzmz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.56:8080/healthz\": dial tcp 10.217.0.56:8080: connect: connection refused" start-of-body= Nov 29 06:40:54 crc kubenswrapper[4947]: I1129 06:40:54.126531 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gfzmz" podUID="f48b107b-5cce-4f64-b7d9-20d9efaa76c6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.56:8080/healthz\": dial tcp 10.217.0.56:8080: connect: connection refused" Nov 29 06:40:54 crc kubenswrapper[4947]: I1129 06:40:54.144204 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gfzmz" podStartSLOduration=17.144182308 podStartE2EDuration="17.144182308s" podCreationTimestamp="2025-11-29 06:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:40:54.142396012 +0000 UTC m=+405.186778133" watchObservedRunningTime="2025-11-29 06:40:54.144182308 +0000 UTC m=+405.188564389" Nov 29 06:40:54 crc kubenswrapper[4947]: I1129 06:40:54.524719 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 29 06:40:54 crc kubenswrapper[4947]: I1129 06:40:54.663191 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 29 06:40:55 crc kubenswrapper[4947]: I1129 06:40:55.132739 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gfzmz" Nov 29 06:40:55 crc 
kubenswrapper[4947]: I1129 06:40:55.930863 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 29 06:40:56 crc kubenswrapper[4947]: I1129 06:40:56.027997 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 29 06:40:56 crc kubenswrapper[4947]: I1129 06:40:56.344629 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 29 06:40:56 crc kubenswrapper[4947]: I1129 06:40:56.889760 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 29 06:40:57 crc kubenswrapper[4947]: I1129 06:40:57.927073 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 29 06:40:58 crc kubenswrapper[4947]: I1129 06:40:58.185181 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 29 06:40:59 crc kubenswrapper[4947]: I1129 06:40:59.117052 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 29 06:41:04 crc kubenswrapper[4947]: I1129 06:41:04.040630 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-72fb2"] Nov 29 06:41:05 crc kubenswrapper[4947]: I1129 06:41:05.962905 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2vgqt"] Nov 29 06:41:05 crc kubenswrapper[4947]: I1129 06:41:05.963418 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" podUID="8ff97c32-0757-44e2-8cad-55b8bfadf0a8" containerName="controller-manager" containerID="cri-o://41d331cfd7ad358032e7559f40cde0b4cdea27ff90e04cd51e848ff078d49828" gracePeriod=30 Nov 
29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.090038 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v"] Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.090287 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" podUID="20404d7a-857c-4f60-beef-e6ef9116804d" containerName="route-controller-manager" containerID="cri-o://ad5f00e402dafba5eab9ea6138a38baba64b289740a22cb0c03187ec38a1a389" gracePeriod=30 Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.202308 4947 generic.go:334] "Generic (PLEG): container finished" podID="8ff97c32-0757-44e2-8cad-55b8bfadf0a8" containerID="41d331cfd7ad358032e7559f40cde0b4cdea27ff90e04cd51e848ff078d49828" exitCode=0 Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.202368 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" event={"ID":"8ff97c32-0757-44e2-8cad-55b8bfadf0a8","Type":"ContainerDied","Data":"41d331cfd7ad358032e7559f40cde0b4cdea27ff90e04cd51e848ff078d49828"} Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.202415 4947 scope.go:117] "RemoveContainer" containerID="5fc41fdc4f8e60ff9b865b0ad50f446ce7ea47c4e71ad1c71a696910a8cb6862" Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.410263 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.470300 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.547564 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-serving-cert\") pod \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\" (UID: \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\") " Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.547609 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5h8f\" (UniqueName: \"kubernetes.io/projected/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-kube-api-access-v5h8f\") pod \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\" (UID: \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\") " Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.547654 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-proxy-ca-bundles\") pod \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\" (UID: \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\") " Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.547684 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-config\") pod \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\" (UID: \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\") " Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.547756 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-client-ca\") pod \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\" (UID: \"8ff97c32-0757-44e2-8cad-55b8bfadf0a8\") " Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.548825 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-client-ca" (OuterVolumeSpecName: "client-ca") pod "8ff97c32-0757-44e2-8cad-55b8bfadf0a8" (UID: "8ff97c32-0757-44e2-8cad-55b8bfadf0a8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.548932 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8ff97c32-0757-44e2-8cad-55b8bfadf0a8" (UID: "8ff97c32-0757-44e2-8cad-55b8bfadf0a8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.549912 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-config" (OuterVolumeSpecName: "config") pod "8ff97c32-0757-44e2-8cad-55b8bfadf0a8" (UID: "8ff97c32-0757-44e2-8cad-55b8bfadf0a8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.553567 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-kube-api-access-v5h8f" (OuterVolumeSpecName: "kube-api-access-v5h8f") pod "8ff97c32-0757-44e2-8cad-55b8bfadf0a8" (UID: "8ff97c32-0757-44e2-8cad-55b8bfadf0a8"). InnerVolumeSpecName "kube-api-access-v5h8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.553861 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8ff97c32-0757-44e2-8cad-55b8bfadf0a8" (UID: "8ff97c32-0757-44e2-8cad-55b8bfadf0a8"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.649092 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20404d7a-857c-4f60-beef-e6ef9116804d-config\") pod \"20404d7a-857c-4f60-beef-e6ef9116804d\" (UID: \"20404d7a-857c-4f60-beef-e6ef9116804d\") " Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.649140 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20404d7a-857c-4f60-beef-e6ef9116804d-client-ca\") pod \"20404d7a-857c-4f60-beef-e6ef9116804d\" (UID: \"20404d7a-857c-4f60-beef-e6ef9116804d\") " Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.649157 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20404d7a-857c-4f60-beef-e6ef9116804d-serving-cert\") pod \"20404d7a-857c-4f60-beef-e6ef9116804d\" (UID: \"20404d7a-857c-4f60-beef-e6ef9116804d\") " Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.649184 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpb2s\" (UniqueName: \"kubernetes.io/projected/20404d7a-857c-4f60-beef-e6ef9116804d-kube-api-access-hpb2s\") pod \"20404d7a-857c-4f60-beef-e6ef9116804d\" (UID: \"20404d7a-857c-4f60-beef-e6ef9116804d\") " Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.649446 4947 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.649460 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:06 crc 
kubenswrapper[4947]: I1129 06:41:06.649470 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.649480 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.649488 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5h8f\" (UniqueName: \"kubernetes.io/projected/8ff97c32-0757-44e2-8cad-55b8bfadf0a8-kube-api-access-v5h8f\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.650083 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20404d7a-857c-4f60-beef-e6ef9116804d-client-ca" (OuterVolumeSpecName: "client-ca") pod "20404d7a-857c-4f60-beef-e6ef9116804d" (UID: "20404d7a-857c-4f60-beef-e6ef9116804d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.650100 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20404d7a-857c-4f60-beef-e6ef9116804d-config" (OuterVolumeSpecName: "config") pod "20404d7a-857c-4f60-beef-e6ef9116804d" (UID: "20404d7a-857c-4f60-beef-e6ef9116804d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.653261 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20404d7a-857c-4f60-beef-e6ef9116804d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "20404d7a-857c-4f60-beef-e6ef9116804d" (UID: "20404d7a-857c-4f60-beef-e6ef9116804d"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.653457 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20404d7a-857c-4f60-beef-e6ef9116804d-kube-api-access-hpb2s" (OuterVolumeSpecName: "kube-api-access-hpb2s") pod "20404d7a-857c-4f60-beef-e6ef9116804d" (UID: "20404d7a-857c-4f60-beef-e6ef9116804d"). InnerVolumeSpecName "kube-api-access-hpb2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.750409 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20404d7a-857c-4f60-beef-e6ef9116804d-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.750452 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20404d7a-857c-4f60-beef-e6ef9116804d-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.750464 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20404d7a-857c-4f60-beef-e6ef9116804d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:06 crc kubenswrapper[4947]: I1129 06:41:06.750476 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpb2s\" (UniqueName: \"kubernetes.io/projected/20404d7a-857c-4f60-beef-e6ef9116804d-kube-api-access-hpb2s\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.208094 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.208088 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2vgqt" event={"ID":"8ff97c32-0757-44e2-8cad-55b8bfadf0a8","Type":"ContainerDied","Data":"a975cb0d2bf72458514f62e38943ed62940c8573c7801a00877100e6de726b42"} Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.208421 4947 scope.go:117] "RemoveContainer" containerID="41d331cfd7ad358032e7559f40cde0b4cdea27ff90e04cd51e848ff078d49828" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.209166 4947 generic.go:334] "Generic (PLEG): container finished" podID="20404d7a-857c-4f60-beef-e6ef9116804d" containerID="ad5f00e402dafba5eab9ea6138a38baba64b289740a22cb0c03187ec38a1a389" exitCode=0 Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.209200 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" event={"ID":"20404d7a-857c-4f60-beef-e6ef9116804d","Type":"ContainerDied","Data":"ad5f00e402dafba5eab9ea6138a38baba64b289740a22cb0c03187ec38a1a389"} Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.209238 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" event={"ID":"20404d7a-857c-4f60-beef-e6ef9116804d","Type":"ContainerDied","Data":"2b390e87f6ffcc305614e4daccab46067f900eae4e11463e1241d9aecbfbc308"} Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.209250 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.223390 4947 scope.go:117] "RemoveContainer" containerID="ad5f00e402dafba5eab9ea6138a38baba64b289740a22cb0c03187ec38a1a389" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.232132 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v"] Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.238356 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8t9v"] Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.242091 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2vgqt"] Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.245115 4947 scope.go:117] "RemoveContainer" containerID="ad5f00e402dafba5eab9ea6138a38baba64b289740a22cb0c03187ec38a1a389" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.245528 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad5f00e402dafba5eab9ea6138a38baba64b289740a22cb0c03187ec38a1a389\": container with ID starting with ad5f00e402dafba5eab9ea6138a38baba64b289740a22cb0c03187ec38a1a389 not found: ID does not exist" containerID="ad5f00e402dafba5eab9ea6138a38baba64b289740a22cb0c03187ec38a1a389" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.245556 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad5f00e402dafba5eab9ea6138a38baba64b289740a22cb0c03187ec38a1a389"} err="failed to get container status \"ad5f00e402dafba5eab9ea6138a38baba64b289740a22cb0c03187ec38a1a389\": rpc error: code = NotFound desc = could not find container \"ad5f00e402dafba5eab9ea6138a38baba64b289740a22cb0c03187ec38a1a389\": container 
with ID starting with ad5f00e402dafba5eab9ea6138a38baba64b289740a22cb0c03187ec38a1a389 not found: ID does not exist" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.246559 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2vgqt"] Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.888719 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-75bb457f55-9qpmb"] Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.888971 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e620beb-b5db-4321-b404-0ef499ded600" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.888990 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e620beb-b5db-4321-b404-0ef499ded600" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889005 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889012 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889021 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" containerName="extract-utilities" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889032 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" containerName="extract-utilities" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889042 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff97c32-0757-44e2-8cad-55b8bfadf0a8" containerName="controller-manager" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889048 4947 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="8ff97c32-0757-44e2-8cad-55b8bfadf0a8" containerName="controller-manager" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889056 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20404d7a-857c-4f60-beef-e6ef9116804d" containerName="route-controller-manager" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889063 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="20404d7a-857c-4f60-beef-e6ef9116804d" containerName="route-controller-manager" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889072 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889079 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889088 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" containerName="extract-utilities" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889097 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" containerName="extract-utilities" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889107 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159f707e-f150-45c6-9371-6b4b272eaf5d" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889116 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="159f707e-f150-45c6-9371-6b4b272eaf5d" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889126 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" containerName="extract-content" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889133 4947 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" containerName="extract-content" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889141 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" containerName="extract-utilities" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889148 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" containerName="extract-utilities" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889157 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" containerName="extract-content" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889164 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" containerName="extract-content" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889175 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" containerName="extract-utilities" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889183 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" containerName="extract-utilities" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889191 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" containerName="marketplace-operator" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889198 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" containerName="marketplace-operator" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889208 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" containerName="extract-utilities" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889216 4947 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" containerName="extract-utilities" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889256 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889270 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889281 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" containerName="extract-content" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889288 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" containerName="extract-content" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889299 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" containerName="extract-content" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889306 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" containerName="extract-content" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889316 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" containerName="extract-content" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889324 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" containerName="extract-content" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889333 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889339 4947 
state_mem.go:107] "Deleted CPUSet assignment" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889350 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159f707e-f150-45c6-9371-6b4b272eaf5d" containerName="extract-content" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889357 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="159f707e-f150-45c6-9371-6b4b272eaf5d" containerName="extract-content" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889367 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889374 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889383 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e620beb-b5db-4321-b404-0ef499ded600" containerName="extract-utilities" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889390 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e620beb-b5db-4321-b404-0ef499ded600" containerName="extract-utilities" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889398 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159f707e-f150-45c6-9371-6b4b272eaf5d" containerName="extract-utilities" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889406 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="159f707e-f150-45c6-9371-6b4b272eaf5d" containerName="extract-utilities" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889415 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" containerName="marketplace-operator" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889423 4947 
state_mem.go:107] "Deleted CPUSet assignment" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" containerName="marketplace-operator" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889434 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e620beb-b5db-4321-b404-0ef499ded600" containerName="extract-content" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889440 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e620beb-b5db-4321-b404-0ef499ded600" containerName="extract-content" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889450 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" containerName="extract-utilities" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889455 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" containerName="extract-utilities" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889464 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889471 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889481 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" containerName="extract-content" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889486 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" containerName="extract-content" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889493 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" containerName="marketplace-operator" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889499 4947 
state_mem.go:107] "Deleted CPUSet assignment" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" containerName="marketplace-operator" Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.889507 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" containerName="marketplace-operator" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889513 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" containerName="marketplace-operator" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889599 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" containerName="marketplace-operator" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889606 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9d714ea-dfd5-4e58-83d6-13a1c1cebfd8" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889614 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd94a1d6-7039-4b84-aa44-ee8ec166da24" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889621 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="159f707e-f150-45c6-9371-6b4b272eaf5d" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889633 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="93c9ef47-b7f5-41f4-8e91-bee4f16b658d" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889644 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="658b72a9-13fc-4881-88c5-109b221bbc48" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889653 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="253ae1cb-50f4-48e7-a004-a70a958c27cd" containerName="registry-server" Nov 29 06:41:07 crc 
kubenswrapper[4947]: I1129 06:41:07.889660 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" containerName="marketplace-operator" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889669 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ff97c32-0757-44e2-8cad-55b8bfadf0a8" containerName="controller-manager" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889677 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a37f770-07c2-40b1-9f24-ccddc3215658" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889685 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e620beb-b5db-4321-b404-0ef499ded600" containerName="registry-server" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889691 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="20404d7a-857c-4f60-beef-e6ef9116804d" containerName="route-controller-manager" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889697 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ff97c32-0757-44e2-8cad-55b8bfadf0a8" containerName="controller-manager" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.889704 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" containerName="marketplace-operator" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.890126 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.893602 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq"] Nov 29 06:41:07 crc kubenswrapper[4947]: E1129 06:41:07.893809 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff97c32-0757-44e2-8cad-55b8bfadf0a8" containerName="controller-manager" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.893825 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff97c32-0757-44e2-8cad-55b8bfadf0a8" containerName="controller-manager" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.893914 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="675de8ae-169a-4737-a290-54cdb32d8cb0" containerName="marketplace-operator" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.894263 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.896728 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.896739 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.896806 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.897176 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.897350 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.897444 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.897589 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.899719 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.899769 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.899927 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 29 
06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.900329 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.901804 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.903799 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.906350 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75bb457f55-9qpmb"] Nov 29 06:41:07 crc kubenswrapper[4947]: I1129 06:41:07.915548 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq"] Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.065868 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd2b8da9-7993-4aba-82cc-e3d90783978a-proxy-ca-bundles\") pod \"controller-manager-75bb457f55-9qpmb\" (UID: \"dd2b8da9-7993-4aba-82cc-e3d90783978a\") " pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.066419 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-serving-cert\") pod \"route-controller-manager-d97b4688c-m9zfq\" (UID: \"07acab73-b7a9-4e96-bc80-91f6f11d4dd2\") " pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.066467 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/dd2b8da9-7993-4aba-82cc-e3d90783978a-config\") pod \"controller-manager-75bb457f55-9qpmb\" (UID: \"dd2b8da9-7993-4aba-82cc-e3d90783978a\") " pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.066498 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r99kf\" (UniqueName: \"kubernetes.io/projected/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-kube-api-access-r99kf\") pod \"route-controller-manager-d97b4688c-m9zfq\" (UID: \"07acab73-b7a9-4e96-bc80-91f6f11d4dd2\") " pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.066604 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-client-ca\") pod \"route-controller-manager-d97b4688c-m9zfq\" (UID: \"07acab73-b7a9-4e96-bc80-91f6f11d4dd2\") " pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.066708 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-config\") pod \"route-controller-manager-d97b4688c-m9zfq\" (UID: \"07acab73-b7a9-4e96-bc80-91f6f11d4dd2\") " pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.066775 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhxv8\" (UniqueName: \"kubernetes.io/projected/dd2b8da9-7993-4aba-82cc-e3d90783978a-kube-api-access-qhxv8\") pod \"controller-manager-75bb457f55-9qpmb\" (UID: 
\"dd2b8da9-7993-4aba-82cc-e3d90783978a\") " pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.066894 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd2b8da9-7993-4aba-82cc-e3d90783978a-serving-cert\") pod \"controller-manager-75bb457f55-9qpmb\" (UID: \"dd2b8da9-7993-4aba-82cc-e3d90783978a\") " pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.071050 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd2b8da9-7993-4aba-82cc-e3d90783978a-client-ca\") pod \"controller-manager-75bb457f55-9qpmb\" (UID: \"dd2b8da9-7993-4aba-82cc-e3d90783978a\") " pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.262486 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd2b8da9-7993-4aba-82cc-e3d90783978a-serving-cert\") pod \"controller-manager-75bb457f55-9qpmb\" (UID: \"dd2b8da9-7993-4aba-82cc-e3d90783978a\") " pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.262555 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd2b8da9-7993-4aba-82cc-e3d90783978a-client-ca\") pod \"controller-manager-75bb457f55-9qpmb\" (UID: \"dd2b8da9-7993-4aba-82cc-e3d90783978a\") " pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.262598 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/dd2b8da9-7993-4aba-82cc-e3d90783978a-proxy-ca-bundles\") pod \"controller-manager-75bb457f55-9qpmb\" (UID: \"dd2b8da9-7993-4aba-82cc-e3d90783978a\") " pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.262619 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-serving-cert\") pod \"route-controller-manager-d97b4688c-m9zfq\" (UID: \"07acab73-b7a9-4e96-bc80-91f6f11d4dd2\") " pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.262654 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd2b8da9-7993-4aba-82cc-e3d90783978a-config\") pod \"controller-manager-75bb457f55-9qpmb\" (UID: \"dd2b8da9-7993-4aba-82cc-e3d90783978a\") " pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.262673 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r99kf\" (UniqueName: \"kubernetes.io/projected/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-kube-api-access-r99kf\") pod \"route-controller-manager-d97b4688c-m9zfq\" (UID: \"07acab73-b7a9-4e96-bc80-91f6f11d4dd2\") " pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.262702 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-client-ca\") pod \"route-controller-manager-d97b4688c-m9zfq\" (UID: \"07acab73-b7a9-4e96-bc80-91f6f11d4dd2\") " pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" Nov 29 06:41:08 crc 
kubenswrapper[4947]: I1129 06:41:08.262731 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-config\") pod \"route-controller-manager-d97b4688c-m9zfq\" (UID: \"07acab73-b7a9-4e96-bc80-91f6f11d4dd2\") " pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.262760 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhxv8\" (UniqueName: \"kubernetes.io/projected/dd2b8da9-7993-4aba-82cc-e3d90783978a-kube-api-access-qhxv8\") pod \"controller-manager-75bb457f55-9qpmb\" (UID: \"dd2b8da9-7993-4aba-82cc-e3d90783978a\") " pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.265277 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd2b8da9-7993-4aba-82cc-e3d90783978a-client-ca\") pod \"controller-manager-75bb457f55-9qpmb\" (UID: \"dd2b8da9-7993-4aba-82cc-e3d90783978a\") " pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.265378 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-client-ca\") pod \"route-controller-manager-d97b4688c-m9zfq\" (UID: \"07acab73-b7a9-4e96-bc80-91f6f11d4dd2\") " pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.265434 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd2b8da9-7993-4aba-82cc-e3d90783978a-proxy-ca-bundles\") pod \"controller-manager-75bb457f55-9qpmb\" (UID: 
\"dd2b8da9-7993-4aba-82cc-e3d90783978a\") " pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.265568 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd2b8da9-7993-4aba-82cc-e3d90783978a-config\") pod \"controller-manager-75bb457f55-9qpmb\" (UID: \"dd2b8da9-7993-4aba-82cc-e3d90783978a\") " pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.274201 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-config\") pod \"route-controller-manager-d97b4688c-m9zfq\" (UID: \"07acab73-b7a9-4e96-bc80-91f6f11d4dd2\") " pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.278565 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd2b8da9-7993-4aba-82cc-e3d90783978a-serving-cert\") pod \"controller-manager-75bb457f55-9qpmb\" (UID: \"dd2b8da9-7993-4aba-82cc-e3d90783978a\") " pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.284467 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-serving-cert\") pod \"route-controller-manager-d97b4688c-m9zfq\" (UID: \"07acab73-b7a9-4e96-bc80-91f6f11d4dd2\") " pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.289984 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r99kf\" (UniqueName: 
\"kubernetes.io/projected/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-kube-api-access-r99kf\") pod \"route-controller-manager-d97b4688c-m9zfq\" (UID: \"07acab73-b7a9-4e96-bc80-91f6f11d4dd2\") " pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.301665 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhxv8\" (UniqueName: \"kubernetes.io/projected/dd2b8da9-7993-4aba-82cc-e3d90783978a-kube-api-access-qhxv8\") pod \"controller-manager-75bb457f55-9qpmb\" (UID: \"dd2b8da9-7993-4aba-82cc-e3d90783978a\") " pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.509775 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.528897 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" Nov 29 06:41:08 crc kubenswrapper[4947]: I1129 06:41:08.818408 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75bb457f55-9qpmb"] Nov 29 06:41:08 crc kubenswrapper[4947]: W1129 06:41:08.846355 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd2b8da9_7993_4aba_82cc_e3d90783978a.slice/crio-c41e83ea3c2cbeae9734a51a4333c3376305da10a7848dbaf7dcb00473a281ae WatchSource:0}: Error finding container c41e83ea3c2cbeae9734a51a4333c3376305da10a7848dbaf7dcb00473a281ae: Status 404 returned error can't find the container with id c41e83ea3c2cbeae9734a51a4333c3376305da10a7848dbaf7dcb00473a281ae Nov 29 06:41:09 crc kubenswrapper[4947]: I1129 06:41:09.084911 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq"] Nov 29 06:41:09 crc kubenswrapper[4947]: W1129 06:41:09.088694 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07acab73_b7a9_4e96_bc80_91f6f11d4dd2.slice/crio-752197ea1a2f0a94992f4df0629a8d1f622d76f8720cd151cd8ef11fff6f1b3c WatchSource:0}: Error finding container 752197ea1a2f0a94992f4df0629a8d1f622d76f8720cd151cd8ef11fff6f1b3c: Status 404 returned error can't find the container with id 752197ea1a2f0a94992f4df0629a8d1f622d76f8720cd151cd8ef11fff6f1b3c Nov 29 06:41:09 crc kubenswrapper[4947]: I1129 06:41:09.193691 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20404d7a-857c-4f60-beef-e6ef9116804d" path="/var/lib/kubelet/pods/20404d7a-857c-4f60-beef-e6ef9116804d/volumes" Nov 29 06:41:09 crc kubenswrapper[4947]: I1129 06:41:09.194823 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8ff97c32-0757-44e2-8cad-55b8bfadf0a8" path="/var/lib/kubelet/pods/8ff97c32-0757-44e2-8cad-55b8bfadf0a8/volumes" Nov 29 06:41:09 crc kubenswrapper[4947]: I1129 06:41:09.290536 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" event={"ID":"07acab73-b7a9-4e96-bc80-91f6f11d4dd2","Type":"ContainerStarted","Data":"09070bd63016a2c395f8a86021cd12a92af269aa0044a6eba5cdd366fb5e5509"} Nov 29 06:41:09 crc kubenswrapper[4947]: I1129 06:41:09.291065 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" event={"ID":"07acab73-b7a9-4e96-bc80-91f6f11d4dd2","Type":"ContainerStarted","Data":"752197ea1a2f0a94992f4df0629a8d1f622d76f8720cd151cd8ef11fff6f1b3c"} Nov 29 06:41:09 crc kubenswrapper[4947]: I1129 06:41:09.291087 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" Nov 29 06:41:09 crc kubenswrapper[4947]: I1129 06:41:09.294600 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" event={"ID":"dd2b8da9-7993-4aba-82cc-e3d90783978a","Type":"ContainerStarted","Data":"2ba55cc09b55b015440160bb15977fd613a5a6a851ac58b144bf3f4c916b6116"} Nov 29 06:41:09 crc kubenswrapper[4947]: I1129 06:41:09.294657 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" event={"ID":"dd2b8da9-7993-4aba-82cc-e3d90783978a","Type":"ContainerStarted","Data":"c41e83ea3c2cbeae9734a51a4333c3376305da10a7848dbaf7dcb00473a281ae"} Nov 29 06:41:09 crc kubenswrapper[4947]: I1129 06:41:09.294826 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" Nov 29 06:41:09 crc kubenswrapper[4947]: I1129 06:41:09.298957 4947 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" Nov 29 06:41:09 crc kubenswrapper[4947]: I1129 06:41:09.305579 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" podStartSLOduration=3.30555696 podStartE2EDuration="3.30555696s" podCreationTimestamp="2025-11-29 06:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:41:09.304547934 +0000 UTC m=+420.348930015" watchObservedRunningTime="2025-11-29 06:41:09.30555696 +0000 UTC m=+420.349939041" Nov 29 06:41:09 crc kubenswrapper[4947]: I1129 06:41:09.324610 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" podStartSLOduration=3.324589709 podStartE2EDuration="3.324589709s" podCreationTimestamp="2025-11-29 06:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:41:09.32151052 +0000 UTC m=+420.365892611" watchObservedRunningTime="2025-11-29 06:41:09.324589709 +0000 UTC m=+420.368971790" Nov 29 06:41:09 crc kubenswrapper[4947]: I1129 06:41:09.629083 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" Nov 29 06:41:22 crc kubenswrapper[4947]: I1129 06:41:22.987882 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 06:41:22 crc kubenswrapper[4947]: I1129 06:41:22.988572 4947 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 06:41:22 crc kubenswrapper[4947]: I1129 06:41:22.988628 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 06:41:22 crc kubenswrapper[4947]: I1129 06:41:22.989172 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6742510082cd58dfd52c8f7fa3778bd9aaaffe372801b3a708a086461b8d5abd"} pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 06:41:22 crc kubenswrapper[4947]: I1129 06:41:22.989258 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" containerID="cri-o://6742510082cd58dfd52c8f7fa3778bd9aaaffe372801b3a708a086461b8d5abd" gracePeriod=600 Nov 29 06:41:23 crc kubenswrapper[4947]: I1129 06:41:23.369704 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerID="6742510082cd58dfd52c8f7fa3778bd9aaaffe372801b3a708a086461b8d5abd" exitCode=0 Nov 29 06:41:23 crc kubenswrapper[4947]: I1129 06:41:23.369761 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerDied","Data":"6742510082cd58dfd52c8f7fa3778bd9aaaffe372801b3a708a086461b8d5abd"} Nov 29 06:41:23 crc kubenswrapper[4947]: I1129 06:41:23.369802 4947 
scope.go:117] "RemoveContainer" containerID="bade16afee9bcb203991caf838d5e9e5302dd0b35462b93b45c34cf98e89c7b8" Nov 29 06:41:24 crc kubenswrapper[4947]: I1129 06:41:24.375576 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"381c0e9d0a59dd5856ec3a6931be38d490899e1db40040b972c52a0ea9ed0855"} Nov 29 06:41:25 crc kubenswrapper[4947]: I1129 06:41:25.115690 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75bb457f55-9qpmb"] Nov 29 06:41:25 crc kubenswrapper[4947]: I1129 06:41:25.116232 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" podUID="dd2b8da9-7993-4aba-82cc-e3d90783978a" containerName="controller-manager" containerID="cri-o://2ba55cc09b55b015440160bb15977fd613a5a6a851ac58b144bf3f4c916b6116" gracePeriod=30 Nov 29 06:41:25 crc kubenswrapper[4947]: I1129 06:41:25.385975 4947 generic.go:334] "Generic (PLEG): container finished" podID="dd2b8da9-7993-4aba-82cc-e3d90783978a" containerID="2ba55cc09b55b015440160bb15977fd613a5a6a851ac58b144bf3f4c916b6116" exitCode=0 Nov 29 06:41:25 crc kubenswrapper[4947]: I1129 06:41:25.387300 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" event={"ID":"dd2b8da9-7993-4aba-82cc-e3d90783978a","Type":"ContainerDied","Data":"2ba55cc09b55b015440160bb15977fd613a5a6a851ac58b144bf3f4c916b6116"} Nov 29 06:41:25 crc kubenswrapper[4947]: I1129 06:41:25.674777 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" Nov 29 06:41:25 crc kubenswrapper[4947]: I1129 06:41:25.773349 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd2b8da9-7993-4aba-82cc-e3d90783978a-serving-cert\") pod \"dd2b8da9-7993-4aba-82cc-e3d90783978a\" (UID: \"dd2b8da9-7993-4aba-82cc-e3d90783978a\") " Nov 29 06:41:25 crc kubenswrapper[4947]: I1129 06:41:25.773408 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd2b8da9-7993-4aba-82cc-e3d90783978a-proxy-ca-bundles\") pod \"dd2b8da9-7993-4aba-82cc-e3d90783978a\" (UID: \"dd2b8da9-7993-4aba-82cc-e3d90783978a\") " Nov 29 06:41:25 crc kubenswrapper[4947]: I1129 06:41:25.773449 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhxv8\" (UniqueName: \"kubernetes.io/projected/dd2b8da9-7993-4aba-82cc-e3d90783978a-kube-api-access-qhxv8\") pod \"dd2b8da9-7993-4aba-82cc-e3d90783978a\" (UID: \"dd2b8da9-7993-4aba-82cc-e3d90783978a\") " Nov 29 06:41:25 crc kubenswrapper[4947]: I1129 06:41:25.773474 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd2b8da9-7993-4aba-82cc-e3d90783978a-config\") pod \"dd2b8da9-7993-4aba-82cc-e3d90783978a\" (UID: \"dd2b8da9-7993-4aba-82cc-e3d90783978a\") " Nov 29 06:41:25 crc kubenswrapper[4947]: I1129 06:41:25.773492 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd2b8da9-7993-4aba-82cc-e3d90783978a-client-ca\") pod \"dd2b8da9-7993-4aba-82cc-e3d90783978a\" (UID: \"dd2b8da9-7993-4aba-82cc-e3d90783978a\") " Nov 29 06:41:25 crc kubenswrapper[4947]: I1129 06:41:25.774609 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/dd2b8da9-7993-4aba-82cc-e3d90783978a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "dd2b8da9-7993-4aba-82cc-e3d90783978a" (UID: "dd2b8da9-7993-4aba-82cc-e3d90783978a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:41:25 crc kubenswrapper[4947]: I1129 06:41:25.774665 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd2b8da9-7993-4aba-82cc-e3d90783978a-client-ca" (OuterVolumeSpecName: "client-ca") pod "dd2b8da9-7993-4aba-82cc-e3d90783978a" (UID: "dd2b8da9-7993-4aba-82cc-e3d90783978a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:41:25 crc kubenswrapper[4947]: I1129 06:41:25.774737 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd2b8da9-7993-4aba-82cc-e3d90783978a-config" (OuterVolumeSpecName: "config") pod "dd2b8da9-7993-4aba-82cc-e3d90783978a" (UID: "dd2b8da9-7993-4aba-82cc-e3d90783978a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:41:25 crc kubenswrapper[4947]: I1129 06:41:25.780371 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd2b8da9-7993-4aba-82cc-e3d90783978a-kube-api-access-qhxv8" (OuterVolumeSpecName: "kube-api-access-qhxv8") pod "dd2b8da9-7993-4aba-82cc-e3d90783978a" (UID: "dd2b8da9-7993-4aba-82cc-e3d90783978a"). InnerVolumeSpecName "kube-api-access-qhxv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:41:25 crc kubenswrapper[4947]: I1129 06:41:25.780440 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd2b8da9-7993-4aba-82cc-e3d90783978a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dd2b8da9-7993-4aba-82cc-e3d90783978a" (UID: "dd2b8da9-7993-4aba-82cc-e3d90783978a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:41:25 crc kubenswrapper[4947]: I1129 06:41:25.875119 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd2b8da9-7993-4aba-82cc-e3d90783978a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:25 crc kubenswrapper[4947]: I1129 06:41:25.875177 4947 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd2b8da9-7993-4aba-82cc-e3d90783978a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:25 crc kubenswrapper[4947]: I1129 06:41:25.875212 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhxv8\" (UniqueName: \"kubernetes.io/projected/dd2b8da9-7993-4aba-82cc-e3d90783978a-kube-api-access-qhxv8\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:25 crc kubenswrapper[4947]: I1129 06:41:25.875253 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd2b8da9-7993-4aba-82cc-e3d90783978a-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:25 crc kubenswrapper[4947]: I1129 06:41:25.875270 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd2b8da9-7993-4aba-82cc-e3d90783978a-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.392476 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" event={"ID":"dd2b8da9-7993-4aba-82cc-e3d90783978a","Type":"ContainerDied","Data":"c41e83ea3c2cbeae9734a51a4333c3376305da10a7848dbaf7dcb00473a281ae"} Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.392592 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75bb457f55-9qpmb" Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.392894 4947 scope.go:117] "RemoveContainer" containerID="2ba55cc09b55b015440160bb15977fd613a5a6a851ac58b144bf3f4c916b6116" Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.431504 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75bb457f55-9qpmb"] Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.446549 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-75bb457f55-9qpmb"] Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.891338 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-c47r7"] Nov 29 06:41:26 crc kubenswrapper[4947]: E1129 06:41:26.891610 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2b8da9-7993-4aba-82cc-e3d90783978a" containerName="controller-manager" Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.891625 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2b8da9-7993-4aba-82cc-e3d90783978a" containerName="controller-manager" Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.891732 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2b8da9-7993-4aba-82cc-e3d90783978a" containerName="controller-manager" Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.892212 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.903185 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m"] Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.903979 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m" Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.907156 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.907283 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.907487 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.907493 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.907666 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.907823 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.913916 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-c47r7"] Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.918890 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.943048 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m"] Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.992659 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/3c66849e-a08e-4092-9009-ae7865bab130-registry-certificates\") pod \"image-registry-66df7c8f76-c47r7\" (UID: \"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.992890 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3c66849e-a08e-4092-9009-ae7865bab130-trusted-ca\") pod \"image-registry-66df7c8f76-c47r7\" (UID: \"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.992925 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3c66849e-a08e-4092-9009-ae7865bab130-installation-pull-secrets\") pod \"image-registry-66df7c8f76-c47r7\" (UID: \"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.992955 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3c66849e-a08e-4092-9009-ae7865bab130-registry-tls\") pod \"image-registry-66df7c8f76-c47r7\" (UID: \"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.992978 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3c66849e-a08e-4092-9009-ae7865bab130-ca-trust-extracted\") pod \"image-registry-66df7c8f76-c47r7\" (UID: \"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:26 crc 
kubenswrapper[4947]: I1129 06:41:26.993004 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-c47r7\" (UID: \"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.993029 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3c66849e-a08e-4092-9009-ae7865bab130-bound-sa-token\") pod \"image-registry-66df7c8f76-c47r7\" (UID: \"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:26 crc kubenswrapper[4947]: I1129 06:41:26.993101 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfgxh\" (UniqueName: \"kubernetes.io/projected/3c66849e-a08e-4092-9009-ae7865bab130-kube-api-access-tfgxh\") pod \"image-registry-66df7c8f76-c47r7\" (UID: \"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.019909 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-c47r7\" (UID: \"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.094764 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfgxh\" (UniqueName: 
\"kubernetes.io/projected/3c66849e-a08e-4092-9009-ae7865bab130-kube-api-access-tfgxh\") pod \"image-registry-66df7c8f76-c47r7\" (UID: \"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.094825 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7134509-609a-45d0-9dc3-525dc84544b3-client-ca\") pod \"controller-manager-bf79fb6ff-5zr4m\" (UID: \"e7134509-609a-45d0-9dc3-525dc84544b3\") " pod="openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.094867 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7134509-609a-45d0-9dc3-525dc84544b3-config\") pod \"controller-manager-bf79fb6ff-5zr4m\" (UID: \"e7134509-609a-45d0-9dc3-525dc84544b3\") " pod="openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.094893 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3c66849e-a08e-4092-9009-ae7865bab130-registry-certificates\") pod \"image-registry-66df7c8f76-c47r7\" (UID: \"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.094928 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxh2x\" (UniqueName: \"kubernetes.io/projected/e7134509-609a-45d0-9dc3-525dc84544b3-kube-api-access-fxh2x\") pod \"controller-manager-bf79fb6ff-5zr4m\" (UID: \"e7134509-609a-45d0-9dc3-525dc84544b3\") " pod="openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m" Nov 29 06:41:27 crc 
kubenswrapper[4947]: I1129 06:41:27.094950 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3c66849e-a08e-4092-9009-ae7865bab130-trusted-ca\") pod \"image-registry-66df7c8f76-c47r7\" (UID: \"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.094972 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7134509-609a-45d0-9dc3-525dc84544b3-serving-cert\") pod \"controller-manager-bf79fb6ff-5zr4m\" (UID: \"e7134509-609a-45d0-9dc3-525dc84544b3\") " pod="openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.095044 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7134509-609a-45d0-9dc3-525dc84544b3-proxy-ca-bundles\") pod \"controller-manager-bf79fb6ff-5zr4m\" (UID: \"e7134509-609a-45d0-9dc3-525dc84544b3\") " pod="openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.095069 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3c66849e-a08e-4092-9009-ae7865bab130-installation-pull-secrets\") pod \"image-registry-66df7c8f76-c47r7\" (UID: \"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.095132 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3c66849e-a08e-4092-9009-ae7865bab130-registry-tls\") pod \"image-registry-66df7c8f76-c47r7\" (UID: 
\"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.095168 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3c66849e-a08e-4092-9009-ae7865bab130-ca-trust-extracted\") pod \"image-registry-66df7c8f76-c47r7\" (UID: \"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.095231 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3c66849e-a08e-4092-9009-ae7865bab130-bound-sa-token\") pod \"image-registry-66df7c8f76-c47r7\" (UID: \"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.095808 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3c66849e-a08e-4092-9009-ae7865bab130-ca-trust-extracted\") pod \"image-registry-66df7c8f76-c47r7\" (UID: \"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.096769 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3c66849e-a08e-4092-9009-ae7865bab130-registry-certificates\") pod \"image-registry-66df7c8f76-c47r7\" (UID: \"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.097070 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/3c66849e-a08e-4092-9009-ae7865bab130-trusted-ca\") pod \"image-registry-66df7c8f76-c47r7\" (UID: \"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.100455 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3c66849e-a08e-4092-9009-ae7865bab130-installation-pull-secrets\") pod \"image-registry-66df7c8f76-c47r7\" (UID: \"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.101451 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3c66849e-a08e-4092-9009-ae7865bab130-registry-tls\") pod \"image-registry-66df7c8f76-c47r7\" (UID: \"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.114176 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfgxh\" (UniqueName: \"kubernetes.io/projected/3c66849e-a08e-4092-9009-ae7865bab130-kube-api-access-tfgxh\") pod \"image-registry-66df7c8f76-c47r7\" (UID: \"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.115861 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3c66849e-a08e-4092-9009-ae7865bab130-bound-sa-token\") pod \"image-registry-66df7c8f76-c47r7\" (UID: \"3c66849e-a08e-4092-9009-ae7865bab130\") " pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.187399 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="dd2b8da9-7993-4aba-82cc-e3d90783978a" path="/var/lib/kubelet/pods/dd2b8da9-7993-4aba-82cc-e3d90783978a/volumes" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.196729 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7134509-609a-45d0-9dc3-525dc84544b3-client-ca\") pod \"controller-manager-bf79fb6ff-5zr4m\" (UID: \"e7134509-609a-45d0-9dc3-525dc84544b3\") " pod="openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.196809 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7134509-609a-45d0-9dc3-525dc84544b3-config\") pod \"controller-manager-bf79fb6ff-5zr4m\" (UID: \"e7134509-609a-45d0-9dc3-525dc84544b3\") " pod="openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.196858 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxh2x\" (UniqueName: \"kubernetes.io/projected/e7134509-609a-45d0-9dc3-525dc84544b3-kube-api-access-fxh2x\") pod \"controller-manager-bf79fb6ff-5zr4m\" (UID: \"e7134509-609a-45d0-9dc3-525dc84544b3\") " pod="openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.196889 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7134509-609a-45d0-9dc3-525dc84544b3-serving-cert\") pod \"controller-manager-bf79fb6ff-5zr4m\" (UID: \"e7134509-609a-45d0-9dc3-525dc84544b3\") " pod="openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.196924 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/e7134509-609a-45d0-9dc3-525dc84544b3-proxy-ca-bundles\") pod \"controller-manager-bf79fb6ff-5zr4m\" (UID: \"e7134509-609a-45d0-9dc3-525dc84544b3\") " pod="openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.197980 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7134509-609a-45d0-9dc3-525dc84544b3-client-ca\") pod \"controller-manager-bf79fb6ff-5zr4m\" (UID: \"e7134509-609a-45d0-9dc3-525dc84544b3\") " pod="openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.198085 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7134509-609a-45d0-9dc3-525dc84544b3-proxy-ca-bundles\") pod \"controller-manager-bf79fb6ff-5zr4m\" (UID: \"e7134509-609a-45d0-9dc3-525dc84544b3\") " pod="openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.198770 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7134509-609a-45d0-9dc3-525dc84544b3-config\") pod \"controller-manager-bf79fb6ff-5zr4m\" (UID: \"e7134509-609a-45d0-9dc3-525dc84544b3\") " pod="openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.200889 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7134509-609a-45d0-9dc3-525dc84544b3-serving-cert\") pod \"controller-manager-bf79fb6ff-5zr4m\" (UID: \"e7134509-609a-45d0-9dc3-525dc84544b3\") " pod="openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.209864 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.214985 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxh2x\" (UniqueName: \"kubernetes.io/projected/e7134509-609a-45d0-9dc3-525dc84544b3-kube-api-access-fxh2x\") pod \"controller-manager-bf79fb6ff-5zr4m\" (UID: \"e7134509-609a-45d0-9dc3-525dc84544b3\") " pod="openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.222853 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m" Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.408821 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-c47r7"] Nov 29 06:41:27 crc kubenswrapper[4947]: W1129 06:41:27.423974 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c66849e_a08e_4092_9009_ae7865bab130.slice/crio-78dc35fa1c2196710c8519543bc351924c864473a59d615b46fdff4bbdd34233 WatchSource:0}: Error finding container 78dc35fa1c2196710c8519543bc351924c864473a59d615b46fdff4bbdd34233: Status 404 returned error can't find the container with id 78dc35fa1c2196710c8519543bc351924c864473a59d615b46fdff4bbdd34233 Nov 29 06:41:27 crc kubenswrapper[4947]: I1129 06:41:27.477440 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m"] Nov 29 06:41:27 crc kubenswrapper[4947]: W1129 06:41:27.485795 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7134509_609a_45d0_9dc3_525dc84544b3.slice/crio-9c86d2e651878ea82e43e0000b07517ba4894e5c5e3822633245ca47eac76ff4 WatchSource:0}: Error finding container 
9c86d2e651878ea82e43e0000b07517ba4894e5c5e3822633245ca47eac76ff4: Status 404 returned error can't find the container with id 9c86d2e651878ea82e43e0000b07517ba4894e5c5e3822633245ca47eac76ff4 Nov 29 06:41:28 crc kubenswrapper[4947]: I1129 06:41:28.408993 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m" event={"ID":"e7134509-609a-45d0-9dc3-525dc84544b3","Type":"ContainerStarted","Data":"b32299e259ed316b7ae2ce9e0a443cd47cf39ffac36235e70865379aee4358c8"} Nov 29 06:41:28 crc kubenswrapper[4947]: I1129 06:41:28.409857 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m" event={"ID":"e7134509-609a-45d0-9dc3-525dc84544b3","Type":"ContainerStarted","Data":"9c86d2e651878ea82e43e0000b07517ba4894e5c5e3822633245ca47eac76ff4"} Nov 29 06:41:28 crc kubenswrapper[4947]: I1129 06:41:28.409955 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m" Nov 29 06:41:28 crc kubenswrapper[4947]: I1129 06:41:28.413324 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" event={"ID":"3c66849e-a08e-4092-9009-ae7865bab130","Type":"ContainerStarted","Data":"f8a0b2a51f7371c4c4cc20d9750daeb4c18f95dc2967d09c449dfca4321c93bb"} Nov 29 06:41:28 crc kubenswrapper[4947]: I1129 06:41:28.413370 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" event={"ID":"3c66849e-a08e-4092-9009-ae7865bab130","Type":"ContainerStarted","Data":"78dc35fa1c2196710c8519543bc351924c864473a59d615b46fdff4bbdd34233"} Nov 29 06:41:28 crc kubenswrapper[4947]: I1129 06:41:28.413521 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:28 crc kubenswrapper[4947]: I1129 06:41:28.413938 
4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m" Nov 29 06:41:28 crc kubenswrapper[4947]: I1129 06:41:28.436126 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bf79fb6ff-5zr4m" podStartSLOduration=3.436107829 podStartE2EDuration="3.436107829s" podCreationTimestamp="2025-11-29 06:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:41:28.435469342 +0000 UTC m=+439.479851433" watchObservedRunningTime="2025-11-29 06:41:28.436107829 +0000 UTC m=+439.480489910" Nov 29 06:41:28 crc kubenswrapper[4947]: I1129 06:41:28.478911 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" podStartSLOduration=2.4788939389999998 podStartE2EDuration="2.478893939s" podCreationTimestamp="2025-11-29 06:41:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:41:28.476702943 +0000 UTC m=+439.521085024" watchObservedRunningTime="2025-11-29 06:41:28.478893939 +0000 UTC m=+439.523276030" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.063061 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" podUID="51755494-2de8-480e-a1e5-fc10c9af3d06" containerName="oauth-openshift" containerID="cri-o://8a1a85f4520f8e1fdf1498e0da61293652c59ed96f6786bee8819969576885dc" gracePeriod=15 Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.420342 4947 generic.go:334] "Generic (PLEG): container finished" podID="51755494-2de8-480e-a1e5-fc10c9af3d06" containerID="8a1a85f4520f8e1fdf1498e0da61293652c59ed96f6786bee8819969576885dc" exitCode=0 Nov 29 06:41:29 crc 
kubenswrapper[4947]: I1129 06:41:29.420437 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" event={"ID":"51755494-2de8-480e-a1e5-fc10c9af3d06","Type":"ContainerDied","Data":"8a1a85f4520f8e1fdf1498e0da61293652c59ed96f6786bee8819969576885dc"} Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.489725 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.633177 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-idp-0-file-data\") pod \"51755494-2de8-480e-a1e5-fc10c9af3d06\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.633311 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-trusted-ca-bundle\") pod \"51755494-2de8-480e-a1e5-fc10c9af3d06\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.633335 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-template-error\") pod \"51755494-2de8-480e-a1e5-fc10c9af3d06\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.633362 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-template-provider-selection\") pod \"51755494-2de8-480e-a1e5-fc10c9af3d06\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.633388 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-router-certs\") pod \"51755494-2de8-480e-a1e5-fc10c9af3d06\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.633425 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-ocp-branding-template\") pod \"51755494-2de8-480e-a1e5-fc10c9af3d06\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.633460 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-template-login\") pod \"51755494-2de8-480e-a1e5-fc10c9af3d06\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.633513 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-session\") pod \"51755494-2de8-480e-a1e5-fc10c9af3d06\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.633541 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-cliconfig\") pod \"51755494-2de8-480e-a1e5-fc10c9af3d06\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.633585 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/51755494-2de8-480e-a1e5-fc10c9af3d06-audit-dir\") pod \"51755494-2de8-480e-a1e5-fc10c9af3d06\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.633612 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-service-ca\") pod \"51755494-2de8-480e-a1e5-fc10c9af3d06\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.633633 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-serving-cert\") pod \"51755494-2de8-480e-a1e5-fc10c9af3d06\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.633662 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwvbd\" (UniqueName: \"kubernetes.io/projected/51755494-2de8-480e-a1e5-fc10c9af3d06-kube-api-access-dwvbd\") pod \"51755494-2de8-480e-a1e5-fc10c9af3d06\" (UID: \"51755494-2de8-480e-a1e5-fc10c9af3d06\") " Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.633697 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-audit-policies\") pod \"51755494-2de8-480e-a1e5-fc10c9af3d06\" (UID: 
\"51755494-2de8-480e-a1e5-fc10c9af3d06\") " Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.634036 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51755494-2de8-480e-a1e5-fc10c9af3d06-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "51755494-2de8-480e-a1e5-fc10c9af3d06" (UID: "51755494-2de8-480e-a1e5-fc10c9af3d06"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.635205 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "51755494-2de8-480e-a1e5-fc10c9af3d06" (UID: "51755494-2de8-480e-a1e5-fc10c9af3d06"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.635326 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "51755494-2de8-480e-a1e5-fc10c9af3d06" (UID: "51755494-2de8-480e-a1e5-fc10c9af3d06"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.635851 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "51755494-2de8-480e-a1e5-fc10c9af3d06" (UID: "51755494-2de8-480e-a1e5-fc10c9af3d06"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.635877 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "51755494-2de8-480e-a1e5-fc10c9af3d06" (UID: "51755494-2de8-480e-a1e5-fc10c9af3d06"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.641659 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "51755494-2de8-480e-a1e5-fc10c9af3d06" (UID: "51755494-2de8-480e-a1e5-fc10c9af3d06"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.641938 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "51755494-2de8-480e-a1e5-fc10c9af3d06" (UID: "51755494-2de8-480e-a1e5-fc10c9af3d06"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.643385 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51755494-2de8-480e-a1e5-fc10c9af3d06-kube-api-access-dwvbd" (OuterVolumeSpecName: "kube-api-access-dwvbd") pod "51755494-2de8-480e-a1e5-fc10c9af3d06" (UID: "51755494-2de8-480e-a1e5-fc10c9af3d06"). InnerVolumeSpecName "kube-api-access-dwvbd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.643480 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "51755494-2de8-480e-a1e5-fc10c9af3d06" (UID: "51755494-2de8-480e-a1e5-fc10c9af3d06"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.644193 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "51755494-2de8-480e-a1e5-fc10c9af3d06" (UID: "51755494-2de8-480e-a1e5-fc10c9af3d06"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.649768 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "51755494-2de8-480e-a1e5-fc10c9af3d06" (UID: "51755494-2de8-480e-a1e5-fc10c9af3d06"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.651865 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "51755494-2de8-480e-a1e5-fc10c9af3d06" (UID: "51755494-2de8-480e-a1e5-fc10c9af3d06"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.651995 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "51755494-2de8-480e-a1e5-fc10c9af3d06" (UID: "51755494-2de8-480e-a1e5-fc10c9af3d06"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.652360 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "51755494-2de8-480e-a1e5-fc10c9af3d06" (UID: "51755494-2de8-480e-a1e5-fc10c9af3d06"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.735526 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.735571 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.735587 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.735602 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.735616 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.735630 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.735644 4947 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.735657 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.735671 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.735684 4947 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/51755494-2de8-480e-a1e5-fc10c9af3d06-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.735697 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.735711 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.735726 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwvbd\" (UniqueName: \"kubernetes.io/projected/51755494-2de8-480e-a1e5-fc10c9af3d06-kube-api-access-dwvbd\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:29 crc kubenswrapper[4947]: I1129 06:41:29.735739 4947 
reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/51755494-2de8-480e-a1e5-fc10c9af3d06-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:30 crc kubenswrapper[4947]: I1129 06:41:30.427466 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" Nov 29 06:41:30 crc kubenswrapper[4947]: I1129 06:41:30.427457 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-72fb2" event={"ID":"51755494-2de8-480e-a1e5-fc10c9af3d06","Type":"ContainerDied","Data":"cf0ed6b71d6619d73770585a5962ccd2c8cbe1e644937cf9c5ac1b79bce0896b"} Nov 29 06:41:30 crc kubenswrapper[4947]: I1129 06:41:30.428010 4947 scope.go:117] "RemoveContainer" containerID="8a1a85f4520f8e1fdf1498e0da61293652c59ed96f6786bee8819969576885dc" Nov 29 06:41:30 crc kubenswrapper[4947]: I1129 06:41:30.471681 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-72fb2"] Nov 29 06:41:30 crc kubenswrapper[4947]: I1129 06:41:30.477895 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-72fb2"] Nov 29 06:41:30 crc kubenswrapper[4947]: I1129 06:41:30.679324 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k262s"] Nov 29 06:41:30 crc kubenswrapper[4947]: E1129 06:41:30.679817 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51755494-2de8-480e-a1e5-fc10c9af3d06" containerName="oauth-openshift" Nov 29 06:41:30 crc kubenswrapper[4947]: I1129 06:41:30.679892 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="51755494-2de8-480e-a1e5-fc10c9af3d06" containerName="oauth-openshift" Nov 29 06:41:30 crc kubenswrapper[4947]: I1129 06:41:30.680045 4947 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="51755494-2de8-480e-a1e5-fc10c9af3d06" containerName="oauth-openshift" Nov 29 06:41:30 crc kubenswrapper[4947]: I1129 06:41:30.680826 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k262s" Nov 29 06:41:30 crc kubenswrapper[4947]: I1129 06:41:30.682877 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 29 06:41:30 crc kubenswrapper[4947]: I1129 06:41:30.691452 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k262s"] Nov 29 06:41:30 crc kubenswrapper[4947]: I1129 06:41:30.849854 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v46w\" (UniqueName: \"kubernetes.io/projected/afb2b7b2-3432-44f3-adb2-f347d656aac2-kube-api-access-2v46w\") pod \"certified-operators-k262s\" (UID: \"afb2b7b2-3432-44f3-adb2-f347d656aac2\") " pod="openshift-marketplace/certified-operators-k262s" Nov 29 06:41:30 crc kubenswrapper[4947]: I1129 06:41:30.850394 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afb2b7b2-3432-44f3-adb2-f347d656aac2-utilities\") pod \"certified-operators-k262s\" (UID: \"afb2b7b2-3432-44f3-adb2-f347d656aac2\") " pod="openshift-marketplace/certified-operators-k262s" Nov 29 06:41:30 crc kubenswrapper[4947]: I1129 06:41:30.850450 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afb2b7b2-3432-44f3-adb2-f347d656aac2-catalog-content\") pod \"certified-operators-k262s\" (UID: \"afb2b7b2-3432-44f3-adb2-f347d656aac2\") " pod="openshift-marketplace/certified-operators-k262s" Nov 29 06:41:30 crc kubenswrapper[4947]: I1129 06:41:30.952132 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2v46w\" (UniqueName: \"kubernetes.io/projected/afb2b7b2-3432-44f3-adb2-f347d656aac2-kube-api-access-2v46w\") pod \"certified-operators-k262s\" (UID: \"afb2b7b2-3432-44f3-adb2-f347d656aac2\") " pod="openshift-marketplace/certified-operators-k262s" Nov 29 06:41:30 crc kubenswrapper[4947]: I1129 06:41:30.952208 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afb2b7b2-3432-44f3-adb2-f347d656aac2-utilities\") pod \"certified-operators-k262s\" (UID: \"afb2b7b2-3432-44f3-adb2-f347d656aac2\") " pod="openshift-marketplace/certified-operators-k262s" Nov 29 06:41:30 crc kubenswrapper[4947]: I1129 06:41:30.952266 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afb2b7b2-3432-44f3-adb2-f347d656aac2-catalog-content\") pod \"certified-operators-k262s\" (UID: \"afb2b7b2-3432-44f3-adb2-f347d656aac2\") " pod="openshift-marketplace/certified-operators-k262s" Nov 29 06:41:30 crc kubenswrapper[4947]: I1129 06:41:30.952884 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afb2b7b2-3432-44f3-adb2-f347d656aac2-catalog-content\") pod \"certified-operators-k262s\" (UID: \"afb2b7b2-3432-44f3-adb2-f347d656aac2\") " pod="openshift-marketplace/certified-operators-k262s" Nov 29 06:41:30 crc kubenswrapper[4947]: I1129 06:41:30.953072 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afb2b7b2-3432-44f3-adb2-f347d656aac2-utilities\") pod \"certified-operators-k262s\" (UID: \"afb2b7b2-3432-44f3-adb2-f347d656aac2\") " pod="openshift-marketplace/certified-operators-k262s" Nov 29 06:41:30 crc kubenswrapper[4947]: I1129 06:41:30.971422 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2v46w\" (UniqueName: \"kubernetes.io/projected/afb2b7b2-3432-44f3-adb2-f347d656aac2-kube-api-access-2v46w\") pod \"certified-operators-k262s\" (UID: \"afb2b7b2-3432-44f3-adb2-f347d656aac2\") " pod="openshift-marketplace/certified-operators-k262s" Nov 29 06:41:30 crc kubenswrapper[4947]: I1129 06:41:30.996437 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k262s" Nov 29 06:41:31 crc kubenswrapper[4947]: I1129 06:41:31.196794 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51755494-2de8-480e-a1e5-fc10c9af3d06" path="/var/lib/kubelet/pods/51755494-2de8-480e-a1e5-fc10c9af3d06/volumes" Nov 29 06:41:31 crc kubenswrapper[4947]: I1129 06:41:31.463054 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k262s"] Nov 29 06:41:31 crc kubenswrapper[4947]: W1129 06:41:31.471494 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafb2b7b2_3432_44f3_adb2_f347d656aac2.slice/crio-b76864cc11cd3573fab1f00ea713b83caf5d7c1fa1456217c9b7c79a46813677 WatchSource:0}: Error finding container b76864cc11cd3573fab1f00ea713b83caf5d7c1fa1456217c9b7c79a46813677: Status 404 returned error can't find the container with id b76864cc11cd3573fab1f00ea713b83caf5d7c1fa1456217c9b7c79a46813677 Nov 29 06:41:31 crc kubenswrapper[4947]: I1129 06:41:31.671345 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bqjz7"] Nov 29 06:41:31 crc kubenswrapper[4947]: I1129 06:41:31.672518 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bqjz7" Nov 29 06:41:31 crc kubenswrapper[4947]: I1129 06:41:31.678063 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 29 06:41:31 crc kubenswrapper[4947]: I1129 06:41:31.678383 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bqjz7"] Nov 29 06:41:31 crc kubenswrapper[4947]: I1129 06:41:31.765064 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9d3e66-fc0e-4abd-8992-3e364ae72745-catalog-content\") pod \"community-operators-bqjz7\" (UID: \"3d9d3e66-fc0e-4abd-8992-3e364ae72745\") " pod="openshift-marketplace/community-operators-bqjz7" Nov 29 06:41:31 crc kubenswrapper[4947]: I1129 06:41:31.765146 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2frm6\" (UniqueName: \"kubernetes.io/projected/3d9d3e66-fc0e-4abd-8992-3e364ae72745-kube-api-access-2frm6\") pod \"community-operators-bqjz7\" (UID: \"3d9d3e66-fc0e-4abd-8992-3e364ae72745\") " pod="openshift-marketplace/community-operators-bqjz7" Nov 29 06:41:31 crc kubenswrapper[4947]: I1129 06:41:31.765188 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9d3e66-fc0e-4abd-8992-3e364ae72745-utilities\") pod \"community-operators-bqjz7\" (UID: \"3d9d3e66-fc0e-4abd-8992-3e364ae72745\") " pod="openshift-marketplace/community-operators-bqjz7" Nov 29 06:41:31 crc kubenswrapper[4947]: I1129 06:41:31.866783 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2frm6\" (UniqueName: \"kubernetes.io/projected/3d9d3e66-fc0e-4abd-8992-3e364ae72745-kube-api-access-2frm6\") pod \"community-operators-bqjz7\" 
(UID: \"3d9d3e66-fc0e-4abd-8992-3e364ae72745\") " pod="openshift-marketplace/community-operators-bqjz7" Nov 29 06:41:31 crc kubenswrapper[4947]: I1129 06:41:31.866867 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9d3e66-fc0e-4abd-8992-3e364ae72745-utilities\") pod \"community-operators-bqjz7\" (UID: \"3d9d3e66-fc0e-4abd-8992-3e364ae72745\") " pod="openshift-marketplace/community-operators-bqjz7" Nov 29 06:41:31 crc kubenswrapper[4947]: I1129 06:41:31.866934 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9d3e66-fc0e-4abd-8992-3e364ae72745-catalog-content\") pod \"community-operators-bqjz7\" (UID: \"3d9d3e66-fc0e-4abd-8992-3e364ae72745\") " pod="openshift-marketplace/community-operators-bqjz7" Nov 29 06:41:31 crc kubenswrapper[4947]: I1129 06:41:31.867624 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9d3e66-fc0e-4abd-8992-3e364ae72745-catalog-content\") pod \"community-operators-bqjz7\" (UID: \"3d9d3e66-fc0e-4abd-8992-3e364ae72745\") " pod="openshift-marketplace/community-operators-bqjz7" Nov 29 06:41:31 crc kubenswrapper[4947]: I1129 06:41:31.868265 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9d3e66-fc0e-4abd-8992-3e364ae72745-utilities\") pod \"community-operators-bqjz7\" (UID: \"3d9d3e66-fc0e-4abd-8992-3e364ae72745\") " pod="openshift-marketplace/community-operators-bqjz7" Nov 29 06:41:31 crc kubenswrapper[4947]: I1129 06:41:31.887041 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2frm6\" (UniqueName: \"kubernetes.io/projected/3d9d3e66-fc0e-4abd-8992-3e364ae72745-kube-api-access-2frm6\") pod \"community-operators-bqjz7\" (UID: \"3d9d3e66-fc0e-4abd-8992-3e364ae72745\") " 
pod="openshift-marketplace/community-operators-bqjz7" Nov 29 06:41:32 crc kubenswrapper[4947]: I1129 06:41:32.007930 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bqjz7" Nov 29 06:41:32 crc kubenswrapper[4947]: I1129 06:41:32.422671 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bqjz7"] Nov 29 06:41:32 crc kubenswrapper[4947]: I1129 06:41:32.445310 4947 generic.go:334] "Generic (PLEG): container finished" podID="afb2b7b2-3432-44f3-adb2-f347d656aac2" containerID="1c8310a584d75da7f1046882448f9541cb6e207fa3968ee01c82dbd3b18a906d" exitCode=0 Nov 29 06:41:32 crc kubenswrapper[4947]: I1129 06:41:32.445412 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k262s" event={"ID":"afb2b7b2-3432-44f3-adb2-f347d656aac2","Type":"ContainerDied","Data":"1c8310a584d75da7f1046882448f9541cb6e207fa3968ee01c82dbd3b18a906d"} Nov 29 06:41:32 crc kubenswrapper[4947]: I1129 06:41:32.445466 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k262s" event={"ID":"afb2b7b2-3432-44f3-adb2-f347d656aac2","Type":"ContainerStarted","Data":"b76864cc11cd3573fab1f00ea713b83caf5d7c1fa1456217c9b7c79a46813677"} Nov 29 06:41:32 crc kubenswrapper[4947]: I1129 06:41:32.446412 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqjz7" event={"ID":"3d9d3e66-fc0e-4abd-8992-3e364ae72745","Type":"ContainerStarted","Data":"96fd5c13c9d44a1ee23380a6be7770652df9b4580ca7d9c8c7ce820604026577"} Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.064631 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8jm8l"] Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.066129 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8jm8l" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.068241 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.082607 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jm8l"] Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.184620 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e-utilities\") pod \"redhat-marketplace-8jm8l\" (UID: \"32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e\") " pod="openshift-marketplace/redhat-marketplace-8jm8l" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.184667 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e-catalog-content\") pod \"redhat-marketplace-8jm8l\" (UID: \"32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e\") " pod="openshift-marketplace/redhat-marketplace-8jm8l" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.184743 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvfvc\" (UniqueName: \"kubernetes.io/projected/32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e-kube-api-access-bvfvc\") pod \"redhat-marketplace-8jm8l\" (UID: \"32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e\") " pod="openshift-marketplace/redhat-marketplace-8jm8l" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.285890 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e-utilities\") pod \"redhat-marketplace-8jm8l\" (UID: 
\"32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e\") " pod="openshift-marketplace/redhat-marketplace-8jm8l" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.285937 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e-catalog-content\") pod \"redhat-marketplace-8jm8l\" (UID: \"32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e\") " pod="openshift-marketplace/redhat-marketplace-8jm8l" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.286052 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvfvc\" (UniqueName: \"kubernetes.io/projected/32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e-kube-api-access-bvfvc\") pod \"redhat-marketplace-8jm8l\" (UID: \"32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e\") " pod="openshift-marketplace/redhat-marketplace-8jm8l" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.286339 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e-utilities\") pod \"redhat-marketplace-8jm8l\" (UID: \"32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e\") " pod="openshift-marketplace/redhat-marketplace-8jm8l" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.286988 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e-catalog-content\") pod \"redhat-marketplace-8jm8l\" (UID: \"32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e\") " pod="openshift-marketplace/redhat-marketplace-8jm8l" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.307792 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvfvc\" (UniqueName: \"kubernetes.io/projected/32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e-kube-api-access-bvfvc\") pod \"redhat-marketplace-8jm8l\" (UID: 
\"32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e\") " pod="openshift-marketplace/redhat-marketplace-8jm8l" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.380109 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8jm8l" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.456627 4947 generic.go:334] "Generic (PLEG): container finished" podID="3d9d3e66-fc0e-4abd-8992-3e364ae72745" containerID="786db43ee8d7af2a78b434dbe8d1a56df1a5f166ec79b68e1263d3cec1b1b17d" exitCode=0 Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.456734 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqjz7" event={"ID":"3d9d3e66-fc0e-4abd-8992-3e364ae72745","Type":"ContainerDied","Data":"786db43ee8d7af2a78b434dbe8d1a56df1a5f166ec79b68e1263d3cec1b1b17d"} Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.804662 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jm8l"] Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.907119 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-56c495df99-9m7wq"] Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.908129 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.912805 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.912943 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.912966 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.912823 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.913648 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.915332 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.915519 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.916117 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.916263 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.916433 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 29 06:41:33 crc 
kubenswrapper[4947]: I1129 06:41:33.916558 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.919312 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.924929 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.928375 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.929449 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 29 06:41:33 crc kubenswrapper[4947]: I1129 06:41:33.931337 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56c495df99-9m7wq"] Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.010479 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.010538 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: 
\"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.010595 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4b268a85-c922-4882-b3c1-5ae3664cfee6-audit-dir\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.010626 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc7td\" (UniqueName: \"kubernetes.io/projected/4b268a85-c922-4882-b3c1-5ae3664cfee6-kube-api-access-kc7td\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.010646 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.010674 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.010701 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-user-template-error\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.010731 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.010754 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.010777 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4b268a85-c922-4882-b3c1-5ae3664cfee6-audit-policies\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.010809 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.010840 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-system-session\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.010860 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-user-template-login\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.010886 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.066374 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xvx2h"] Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.067560 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xvx2h" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.070394 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.082185 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xvx2h"] Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.112108 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4b268a85-c922-4882-b3c1-5ae3664cfee6-audit-dir\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.112405 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc7td\" (UniqueName: \"kubernetes.io/projected/4b268a85-c922-4882-b3c1-5ae3664cfee6-kube-api-access-kc7td\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.112491 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.112575 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.112282 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4b268a85-c922-4882-b3c1-5ae3664cfee6-audit-dir\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.112954 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-user-template-error\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.113063 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.113174 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.114091 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4b268a85-c922-4882-b3c1-5ae3664cfee6-audit-policies\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.114209 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.114332 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-system-session\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.114415 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-user-template-login\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.114518 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: 
\"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.114651 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.114742 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.115877 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.113932 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.114704 4947 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4b268a85-c922-4882-b3c1-5ae3664cfee6-audit-policies\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.119452 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-system-session\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.119681 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.119684 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.119942 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-user-template-error\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " 
pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.120138 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.120786 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.121665 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.123308 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.124270 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/4b268a85-c922-4882-b3c1-5ae3664cfee6-v4-0-config-user-template-login\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.132385 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc7td\" (UniqueName: \"kubernetes.io/projected/4b268a85-c922-4882-b3c1-5ae3664cfee6-kube-api-access-kc7td\") pod \"oauth-openshift-56c495df99-9m7wq\" (UID: \"4b268a85-c922-4882-b3c1-5ae3664cfee6\") " pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.215978 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb500be3-14fe-4b36-9690-6e603eee0771-utilities\") pod \"redhat-operators-xvx2h\" (UID: \"fb500be3-14fe-4b36-9690-6e603eee0771\") " pod="openshift-marketplace/redhat-operators-xvx2h" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.216096 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84m8m\" (UniqueName: \"kubernetes.io/projected/fb500be3-14fe-4b36-9690-6e603eee0771-kube-api-access-84m8m\") pod \"redhat-operators-xvx2h\" (UID: \"fb500be3-14fe-4b36-9690-6e603eee0771\") " pod="openshift-marketplace/redhat-operators-xvx2h" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.216175 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb500be3-14fe-4b36-9690-6e603eee0771-catalog-content\") pod \"redhat-operators-xvx2h\" (UID: \"fb500be3-14fe-4b36-9690-6e603eee0771\") " pod="openshift-marketplace/redhat-operators-xvx2h" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.231943 4947 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.316993 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb500be3-14fe-4b36-9690-6e603eee0771-catalog-content\") pod \"redhat-operators-xvx2h\" (UID: \"fb500be3-14fe-4b36-9690-6e603eee0771\") " pod="openshift-marketplace/redhat-operators-xvx2h" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.317059 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb500be3-14fe-4b36-9690-6e603eee0771-utilities\") pod \"redhat-operators-xvx2h\" (UID: \"fb500be3-14fe-4b36-9690-6e603eee0771\") " pod="openshift-marketplace/redhat-operators-xvx2h" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.317088 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84m8m\" (UniqueName: \"kubernetes.io/projected/fb500be3-14fe-4b36-9690-6e603eee0771-kube-api-access-84m8m\") pod \"redhat-operators-xvx2h\" (UID: \"fb500be3-14fe-4b36-9690-6e603eee0771\") " pod="openshift-marketplace/redhat-operators-xvx2h" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.317920 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb500be3-14fe-4b36-9690-6e603eee0771-catalog-content\") pod \"redhat-operators-xvx2h\" (UID: \"fb500be3-14fe-4b36-9690-6e603eee0771\") " pod="openshift-marketplace/redhat-operators-xvx2h" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.317960 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb500be3-14fe-4b36-9690-6e603eee0771-utilities\") pod \"redhat-operators-xvx2h\" (UID: \"fb500be3-14fe-4b36-9690-6e603eee0771\") " 
pod="openshift-marketplace/redhat-operators-xvx2h" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.341249 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84m8m\" (UniqueName: \"kubernetes.io/projected/fb500be3-14fe-4b36-9690-6e603eee0771-kube-api-access-84m8m\") pod \"redhat-operators-xvx2h\" (UID: \"fb500be3-14fe-4b36-9690-6e603eee0771\") " pod="openshift-marketplace/redhat-operators-xvx2h" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.386493 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xvx2h" Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.473071 4947 generic.go:334] "Generic (PLEG): container finished" podID="32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e" containerID="a9951da3de99472a2674f5d21b54591bd850b8ebb3461d24f02d30eaf4df094e" exitCode=0 Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.473134 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jm8l" event={"ID":"32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e","Type":"ContainerDied","Data":"a9951da3de99472a2674f5d21b54591bd850b8ebb3461d24f02d30eaf4df094e"} Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.473160 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jm8l" event={"ID":"32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e","Type":"ContainerStarted","Data":"411799d770ae4786425325ce786dafdf78af3044fb216fb313ae6f84ee4a057b"} Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.478535 4947 generic.go:334] "Generic (PLEG): container finished" podID="afb2b7b2-3432-44f3-adb2-f347d656aac2" containerID="2f8122bf601423b5fbbac371b758673f4c84d8a8bbce55fc78d88510c8ce3eb5" exitCode=0 Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.478593 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k262s" 
event={"ID":"afb2b7b2-3432-44f3-adb2-f347d656aac2","Type":"ContainerDied","Data":"2f8122bf601423b5fbbac371b758673f4c84d8a8bbce55fc78d88510c8ce3eb5"} Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.701686 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56c495df99-9m7wq"] Nov 29 06:41:34 crc kubenswrapper[4947]: W1129 06:41:34.707755 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b268a85_c922_4882_b3c1_5ae3664cfee6.slice/crio-499f3d292af1352fc36c6d0ce49c904e83a86ac90e965ea0d4b797350346661c WatchSource:0}: Error finding container 499f3d292af1352fc36c6d0ce49c904e83a86ac90e965ea0d4b797350346661c: Status 404 returned error can't find the container with id 499f3d292af1352fc36c6d0ce49c904e83a86ac90e965ea0d4b797350346661c Nov 29 06:41:34 crc kubenswrapper[4947]: I1129 06:41:34.884062 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xvx2h"] Nov 29 06:41:35 crc kubenswrapper[4947]: I1129 06:41:35.486245 4947 generic.go:334] "Generic (PLEG): container finished" podID="3d9d3e66-fc0e-4abd-8992-3e364ae72745" containerID="35b5de5e888d125f6861802d4e798eea40269af439410e05694a14a4f0169973" exitCode=0 Nov 29 06:41:35 crc kubenswrapper[4947]: I1129 06:41:35.486328 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqjz7" event={"ID":"3d9d3e66-fc0e-4abd-8992-3e364ae72745","Type":"ContainerDied","Data":"35b5de5e888d125f6861802d4e798eea40269af439410e05694a14a4f0169973"} Nov 29 06:41:35 crc kubenswrapper[4947]: I1129 06:41:35.488000 4947 generic.go:334] "Generic (PLEG): container finished" podID="fb500be3-14fe-4b36-9690-6e603eee0771" containerID="c3c287263add84a9f17eb37df74b8a603a83222b84bdf8ad601b58275c823c8e" exitCode=0 Nov 29 06:41:35 crc kubenswrapper[4947]: I1129 06:41:35.488054 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-xvx2h" event={"ID":"fb500be3-14fe-4b36-9690-6e603eee0771","Type":"ContainerDied","Data":"c3c287263add84a9f17eb37df74b8a603a83222b84bdf8ad601b58275c823c8e"} Nov 29 06:41:35 crc kubenswrapper[4947]: I1129 06:41:35.488076 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvx2h" event={"ID":"fb500be3-14fe-4b36-9690-6e603eee0771","Type":"ContainerStarted","Data":"6dfa389f67410e6456667d9cc287c89a0dbdae98f0549383f8fad151141ed61e"} Nov 29 06:41:35 crc kubenswrapper[4947]: I1129 06:41:35.494270 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k262s" event={"ID":"afb2b7b2-3432-44f3-adb2-f347d656aac2","Type":"ContainerStarted","Data":"822d8081848d365d4fde3bb2d2c9671dac12214c2fe9045ecb7525bdd223d529"} Nov 29 06:41:35 crc kubenswrapper[4947]: I1129 06:41:35.495699 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" event={"ID":"4b268a85-c922-4882-b3c1-5ae3664cfee6","Type":"ContainerStarted","Data":"08521e1b1a1b992aa69d5bccd6d89d0a68247075b202b62b346bf9e1de2f07ad"} Nov 29 06:41:35 crc kubenswrapper[4947]: I1129 06:41:35.495734 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" event={"ID":"4b268a85-c922-4882-b3c1-5ae3664cfee6","Type":"ContainerStarted","Data":"499f3d292af1352fc36c6d0ce49c904e83a86ac90e965ea0d4b797350346661c"} Nov 29 06:41:35 crc kubenswrapper[4947]: I1129 06:41:35.496214 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:35 crc kubenswrapper[4947]: I1129 06:41:35.525807 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" podStartSLOduration=31.525785322 podStartE2EDuration="31.525785322s" 
podCreationTimestamp="2025-11-29 06:41:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:41:35.524376016 +0000 UTC m=+446.568758097" watchObservedRunningTime="2025-11-29 06:41:35.525785322 +0000 UTC m=+446.570167403" Nov 29 06:41:35 crc kubenswrapper[4947]: I1129 06:41:35.558210 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k262s" podStartSLOduration=3.091536427 podStartE2EDuration="5.558189606s" podCreationTimestamp="2025-11-29 06:41:30 +0000 UTC" firstStartedPulling="2025-11-29 06:41:32.44678692 +0000 UTC m=+443.491169001" lastFinishedPulling="2025-11-29 06:41:34.913440089 +0000 UTC m=+445.957822180" observedRunningTime="2025-11-29 06:41:35.558030772 +0000 UTC m=+446.602412863" watchObservedRunningTime="2025-11-29 06:41:35.558189606 +0000 UTC m=+446.602571687" Nov 29 06:41:35 crc kubenswrapper[4947]: I1129 06:41:35.602962 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-56c495df99-9m7wq" Nov 29 06:41:36 crc kubenswrapper[4947]: I1129 06:41:36.502398 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqjz7" event={"ID":"3d9d3e66-fc0e-4abd-8992-3e364ae72745","Type":"ContainerStarted","Data":"0fc75f6ab359ef25e3d45a02261ddba31ef3e1d0a16d38b3dbb4837bda69d7b2"} Nov 29 06:41:36 crc kubenswrapper[4947]: I1129 06:41:36.505379 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvx2h" event={"ID":"fb500be3-14fe-4b36-9690-6e603eee0771","Type":"ContainerStarted","Data":"913c8586c0ba1a7d0d89e2f29bbb446a67332a273626c5d78ab3bdfd63503332"} Nov 29 06:41:36 crc kubenswrapper[4947]: I1129 06:41:36.507136 4947 generic.go:334] "Generic (PLEG): container finished" podID="32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e" 
containerID="ad61af1a66a3830a4f4f019e2df5a83a8a023783b3197e091c23424e806f83cd" exitCode=0 Nov 29 06:41:36 crc kubenswrapper[4947]: I1129 06:41:36.507321 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jm8l" event={"ID":"32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e","Type":"ContainerDied","Data":"ad61af1a66a3830a4f4f019e2df5a83a8a023783b3197e091c23424e806f83cd"} Nov 29 06:41:36 crc kubenswrapper[4947]: I1129 06:41:36.543766 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bqjz7" podStartSLOduration=2.695177647 podStartE2EDuration="5.543748341s" podCreationTimestamp="2025-11-29 06:41:31 +0000 UTC" firstStartedPulling="2025-11-29 06:41:33.460016897 +0000 UTC m=+444.504398988" lastFinishedPulling="2025-11-29 06:41:36.308587601 +0000 UTC m=+447.352969682" observedRunningTime="2025-11-29 06:41:36.51960545 +0000 UTC m=+447.563987541" watchObservedRunningTime="2025-11-29 06:41:36.543748341 +0000 UTC m=+447.588130422" Nov 29 06:41:37 crc kubenswrapper[4947]: I1129 06:41:37.513245 4947 generic.go:334] "Generic (PLEG): container finished" podID="fb500be3-14fe-4b36-9690-6e603eee0771" containerID="913c8586c0ba1a7d0d89e2f29bbb446a67332a273626c5d78ab3bdfd63503332" exitCode=0 Nov 29 06:41:37 crc kubenswrapper[4947]: I1129 06:41:37.513294 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvx2h" event={"ID":"fb500be3-14fe-4b36-9690-6e603eee0771","Type":"ContainerDied","Data":"913c8586c0ba1a7d0d89e2f29bbb446a67332a273626c5d78ab3bdfd63503332"} Nov 29 06:41:37 crc kubenswrapper[4947]: I1129 06:41:37.516181 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jm8l" event={"ID":"32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e","Type":"ContainerStarted","Data":"4c9247ddf51a70859bda3eb7951a88a786c5a6a66fd5115d1c16b8c44a7b21bb"} Nov 29 06:41:37 crc kubenswrapper[4947]: I1129 06:41:37.558403 
4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8jm8l" podStartSLOduration=2.098548602 podStartE2EDuration="4.558362534s" podCreationTimestamp="2025-11-29 06:41:33 +0000 UTC" firstStartedPulling="2025-11-29 06:41:34.474728873 +0000 UTC m=+445.519110954" lastFinishedPulling="2025-11-29 06:41:36.934542785 +0000 UTC m=+447.978924886" observedRunningTime="2025-11-29 06:41:37.553588611 +0000 UTC m=+448.597970702" watchObservedRunningTime="2025-11-29 06:41:37.558362534 +0000 UTC m=+448.602744605" Nov 29 06:41:38 crc kubenswrapper[4947]: I1129 06:41:38.524700 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvx2h" event={"ID":"fb500be3-14fe-4b36-9690-6e603eee0771","Type":"ContainerStarted","Data":"40ffcad495287a15c288e4e90e056abc6640627b3842c6ce8b7e508d77bde8fb"} Nov 29 06:41:38 crc kubenswrapper[4947]: I1129 06:41:38.543907 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xvx2h" podStartSLOduration=2.103797662 podStartE2EDuration="4.543886168s" podCreationTimestamp="2025-11-29 06:41:34 +0000 UTC" firstStartedPulling="2025-11-29 06:41:35.489684353 +0000 UTC m=+446.534066434" lastFinishedPulling="2025-11-29 06:41:37.929772859 +0000 UTC m=+448.974154940" observedRunningTime="2025-11-29 06:41:38.542650777 +0000 UTC m=+449.587032858" watchObservedRunningTime="2025-11-29 06:41:38.543886168 +0000 UTC m=+449.588268249" Nov 29 06:41:40 crc kubenswrapper[4947]: I1129 06:41:40.998047 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k262s" Nov 29 06:41:40 crc kubenswrapper[4947]: I1129 06:41:40.998533 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k262s" Nov 29 06:41:41 crc kubenswrapper[4947]: I1129 06:41:41.040604 4947 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/certified-operators-k262s" Nov 29 06:41:41 crc kubenswrapper[4947]: I1129 06:41:41.579089 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k262s" Nov 29 06:41:42 crc kubenswrapper[4947]: I1129 06:41:42.008167 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bqjz7" Nov 29 06:41:42 crc kubenswrapper[4947]: I1129 06:41:42.008286 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bqjz7" Nov 29 06:41:42 crc kubenswrapper[4947]: I1129 06:41:42.048009 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bqjz7" Nov 29 06:41:42 crc kubenswrapper[4947]: I1129 06:41:42.596051 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bqjz7" Nov 29 06:41:43 crc kubenswrapper[4947]: I1129 06:41:43.380591 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8jm8l" Nov 29 06:41:43 crc kubenswrapper[4947]: I1129 06:41:43.380652 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8jm8l" Nov 29 06:41:43 crc kubenswrapper[4947]: I1129 06:41:43.427594 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8jm8l" Nov 29 06:41:43 crc kubenswrapper[4947]: I1129 06:41:43.595288 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8jm8l" Nov 29 06:41:44 crc kubenswrapper[4947]: I1129 06:41:44.387491 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xvx2h" Nov 29 06:41:44 crc 
kubenswrapper[4947]: I1129 06:41:44.387839 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xvx2h" Nov 29 06:41:44 crc kubenswrapper[4947]: I1129 06:41:44.428885 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xvx2h" Nov 29 06:41:44 crc kubenswrapper[4947]: I1129 06:41:44.593354 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xvx2h" Nov 29 06:41:45 crc kubenswrapper[4947]: I1129 06:41:45.122163 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq"] Nov 29 06:41:45 crc kubenswrapper[4947]: I1129 06:41:45.122456 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" podUID="07acab73-b7a9-4e96-bc80-91f6f11d4dd2" containerName="route-controller-manager" containerID="cri-o://09070bd63016a2c395f8a86021cd12a92af269aa0044a6eba5cdd366fb5e5509" gracePeriod=30 Nov 29 06:41:47 crc kubenswrapper[4947]: I1129 06:41:47.215101 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-c47r7" Nov 29 06:41:47 crc kubenswrapper[4947]: I1129 06:41:47.279799 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cjcrf"] Nov 29 06:41:48 crc kubenswrapper[4947]: I1129 06:41:48.531064 4947 patch_prober.go:28] interesting pod/route-controller-manager-d97b4688c-m9zfq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Nov 29 06:41:48 crc kubenswrapper[4947]: I1129 06:41:48.531144 4947 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" podUID="07acab73-b7a9-4e96-bc80-91f6f11d4dd2" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Nov 29 06:41:48 crc kubenswrapper[4947]: I1129 06:41:48.614474 4947 patch_prober.go:28] interesting pod/router-default-5444994796-6ndbx container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 29 06:41:48 crc kubenswrapper[4947]: I1129 06:41:48.614571 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-6ndbx" podUID="a4cf8b6f-d7b3-4cc2-adf5-2494f5d21b71" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.097881 4947 generic.go:334] "Generic (PLEG): container finished" podID="07acab73-b7a9-4e96-bc80-91f6f11d4dd2" containerID="09070bd63016a2c395f8a86021cd12a92af269aa0044a6eba5cdd366fb5e5509" exitCode=0 Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.097913 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" event={"ID":"07acab73-b7a9-4e96-bc80-91f6f11d4dd2","Type":"ContainerDied","Data":"09070bd63016a2c395f8a86021cd12a92af269aa0044a6eba5cdd366fb5e5509"} Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.390168 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.417501 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76df7fdf8-m568r"] Nov 29 06:41:50 crc kubenswrapper[4947]: E1129 06:41:50.417779 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07acab73-b7a9-4e96-bc80-91f6f11d4dd2" containerName="route-controller-manager" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.417795 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="07acab73-b7a9-4e96-bc80-91f6f11d4dd2" containerName="route-controller-manager" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.417927 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="07acab73-b7a9-4e96-bc80-91f6f11d4dd2" containerName="route-controller-manager" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.418537 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76df7fdf8-m568r" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.434256 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76df7fdf8-m568r"] Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.448407 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-client-ca\") pod \"07acab73-b7a9-4e96-bc80-91f6f11d4dd2\" (UID: \"07acab73-b7a9-4e96-bc80-91f6f11d4dd2\") " Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.448589 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-serving-cert\") pod \"07acab73-b7a9-4e96-bc80-91f6f11d4dd2\" (UID: \"07acab73-b7a9-4e96-bc80-91f6f11d4dd2\") " Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.448631 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r99kf\" (UniqueName: \"kubernetes.io/projected/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-kube-api-access-r99kf\") pod \"07acab73-b7a9-4e96-bc80-91f6f11d4dd2\" (UID: \"07acab73-b7a9-4e96-bc80-91f6f11d4dd2\") " Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.448680 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-config\") pod \"07acab73-b7a9-4e96-bc80-91f6f11d4dd2\" (UID: \"07acab73-b7a9-4e96-bc80-91f6f11d4dd2\") " Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.449093 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2-config\") pod 
\"route-controller-manager-76df7fdf8-m568r\" (UID: \"29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2\") " pod="openshift-route-controller-manager/route-controller-manager-76df7fdf8-m568r" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.449186 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2-serving-cert\") pod \"route-controller-manager-76df7fdf8-m568r\" (UID: \"29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2\") " pod="openshift-route-controller-manager/route-controller-manager-76df7fdf8-m568r" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.449206 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j55d\" (UniqueName: \"kubernetes.io/projected/29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2-kube-api-access-7j55d\") pod \"route-controller-manager-76df7fdf8-m568r\" (UID: \"29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2\") " pod="openshift-route-controller-manager/route-controller-manager-76df7fdf8-m568r" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.449296 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2-client-ca\") pod \"route-controller-manager-76df7fdf8-m568r\" (UID: \"29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2\") " pod="openshift-route-controller-manager/route-controller-manager-76df7fdf8-m568r" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.450243 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-client-ca" (OuterVolumeSpecName: "client-ca") pod "07acab73-b7a9-4e96-bc80-91f6f11d4dd2" (UID: "07acab73-b7a9-4e96-bc80-91f6f11d4dd2"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.451002 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-config" (OuterVolumeSpecName: "config") pod "07acab73-b7a9-4e96-bc80-91f6f11d4dd2" (UID: "07acab73-b7a9-4e96-bc80-91f6f11d4dd2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.458156 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-kube-api-access-r99kf" (OuterVolumeSpecName: "kube-api-access-r99kf") pod "07acab73-b7a9-4e96-bc80-91f6f11d4dd2" (UID: "07acab73-b7a9-4e96-bc80-91f6f11d4dd2"). InnerVolumeSpecName "kube-api-access-r99kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.459739 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "07acab73-b7a9-4e96-bc80-91f6f11d4dd2" (UID: "07acab73-b7a9-4e96-bc80-91f6f11d4dd2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.550479 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2-serving-cert\") pod \"route-controller-manager-76df7fdf8-m568r\" (UID: \"29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2\") " pod="openshift-route-controller-manager/route-controller-manager-76df7fdf8-m568r" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.550569 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j55d\" (UniqueName: \"kubernetes.io/projected/29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2-kube-api-access-7j55d\") pod \"route-controller-manager-76df7fdf8-m568r\" (UID: \"29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2\") " pod="openshift-route-controller-manager/route-controller-manager-76df7fdf8-m568r" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.550630 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2-client-ca\") pod \"route-controller-manager-76df7fdf8-m568r\" (UID: \"29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2\") " pod="openshift-route-controller-manager/route-controller-manager-76df7fdf8-m568r" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.550665 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2-config\") pod \"route-controller-manager-76df7fdf8-m568r\" (UID: \"29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2\") " pod="openshift-route-controller-manager/route-controller-manager-76df7fdf8-m568r" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.550742 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.550758 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.550786 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.550800 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r99kf\" (UniqueName: \"kubernetes.io/projected/07acab73-b7a9-4e96-bc80-91f6f11d4dd2-kube-api-access-r99kf\") on node \"crc\" DevicePath \"\"" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.552129 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2-client-ca\") pod \"route-controller-manager-76df7fdf8-m568r\" (UID: \"29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2\") " pod="openshift-route-controller-manager/route-controller-manager-76df7fdf8-m568r" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.552657 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2-config\") pod \"route-controller-manager-76df7fdf8-m568r\" (UID: \"29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2\") " pod="openshift-route-controller-manager/route-controller-manager-76df7fdf8-m568r" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.553961 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2-serving-cert\") pod 
\"route-controller-manager-76df7fdf8-m568r\" (UID: \"29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2\") " pod="openshift-route-controller-manager/route-controller-manager-76df7fdf8-m568r" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.570440 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j55d\" (UniqueName: \"kubernetes.io/projected/29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2-kube-api-access-7j55d\") pod \"route-controller-manager-76df7fdf8-m568r\" (UID: \"29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2\") " pod="openshift-route-controller-manager/route-controller-manager-76df7fdf8-m568r" Nov 29 06:41:50 crc kubenswrapper[4947]: I1129 06:41:50.757376 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76df7fdf8-m568r" Nov 29 06:41:51 crc kubenswrapper[4947]: I1129 06:41:51.106080 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" event={"ID":"07acab73-b7a9-4e96-bc80-91f6f11d4dd2","Type":"ContainerDied","Data":"752197ea1a2f0a94992f4df0629a8d1f622d76f8720cd151cd8ef11fff6f1b3c"} Nov 29 06:41:51 crc kubenswrapper[4947]: I1129 06:41:51.106463 4947 scope.go:117] "RemoveContainer" containerID="09070bd63016a2c395f8a86021cd12a92af269aa0044a6eba5cdd366fb5e5509" Nov 29 06:41:51 crc kubenswrapper[4947]: I1129 06:41:51.106314 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq" Nov 29 06:41:51 crc kubenswrapper[4947]: I1129 06:41:51.151723 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq"] Nov 29 06:41:51 crc kubenswrapper[4947]: I1129 06:41:51.154612 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d97b4688c-m9zfq"] Nov 29 06:41:51 crc kubenswrapper[4947]: I1129 06:41:51.187474 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07acab73-b7a9-4e96-bc80-91f6f11d4dd2" path="/var/lib/kubelet/pods/07acab73-b7a9-4e96-bc80-91f6f11d4dd2/volumes" Nov 29 06:41:51 crc kubenswrapper[4947]: I1129 06:41:51.195774 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76df7fdf8-m568r"] Nov 29 06:41:51 crc kubenswrapper[4947]: W1129 06:41:51.201447 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29715bcd_4ed5_45bf_9ab5_5f0e7f7d49b2.slice/crio-aa5c812a4dd6de54017bd9826fa31b423827141e1d1e3a1e3a89d6aab042312c WatchSource:0}: Error finding container aa5c812a4dd6de54017bd9826fa31b423827141e1d1e3a1e3a89d6aab042312c: Status 404 returned error can't find the container with id aa5c812a4dd6de54017bd9826fa31b423827141e1d1e3a1e3a89d6aab042312c Nov 29 06:41:52 crc kubenswrapper[4947]: I1129 06:41:52.114626 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76df7fdf8-m568r" event={"ID":"29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2","Type":"ContainerStarted","Data":"5c5c42f7e97d939be5cd013e7f14f33526a0c626103a5f9f5e4d3ca63886df39"} Nov 29 06:41:52 crc kubenswrapper[4947]: I1129 06:41:52.115059 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-76df7fdf8-m568r" Nov 29 06:41:52 crc kubenswrapper[4947]: I1129 06:41:52.115081 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76df7fdf8-m568r" event={"ID":"29715bcd-4ed5-45bf-9ab5-5f0e7f7d49b2","Type":"ContainerStarted","Data":"aa5c812a4dd6de54017bd9826fa31b423827141e1d1e3a1e3a89d6aab042312c"} Nov 29 06:41:52 crc kubenswrapper[4947]: I1129 06:41:52.140664 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76df7fdf8-m568r" podStartSLOduration=7.140637711 podStartE2EDuration="7.140637711s" podCreationTimestamp="2025-11-29 06:41:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:41:52.135304233 +0000 UTC m=+463.179686334" watchObservedRunningTime="2025-11-29 06:41:52.140637711 +0000 UTC m=+463.185019802" Nov 29 06:41:52 crc kubenswrapper[4947]: I1129 06:41:52.271527 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76df7fdf8-m568r" Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.321649 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" podUID="819051f4-236d-42d3-b3cf-c82103136dce" containerName="registry" containerID="cri-o://9fa2f1f31d39edaf36810c348cb6d2c8f08a391489acff1804cdc85a7b24264e" gracePeriod=30 Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.748649 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.796110 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"819051f4-236d-42d3-b3cf-c82103136dce\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.796493 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/819051f4-236d-42d3-b3cf-c82103136dce-trusted-ca\") pod \"819051f4-236d-42d3-b3cf-c82103136dce\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.796539 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/819051f4-236d-42d3-b3cf-c82103136dce-registry-tls\") pod \"819051f4-236d-42d3-b3cf-c82103136dce\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.796568 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/819051f4-236d-42d3-b3cf-c82103136dce-installation-pull-secrets\") pod \"819051f4-236d-42d3-b3cf-c82103136dce\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.796599 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j95cb\" (UniqueName: \"kubernetes.io/projected/819051f4-236d-42d3-b3cf-c82103136dce-kube-api-access-j95cb\") pod \"819051f4-236d-42d3-b3cf-c82103136dce\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.796623 4947 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/819051f4-236d-42d3-b3cf-c82103136dce-ca-trust-extracted\") pod \"819051f4-236d-42d3-b3cf-c82103136dce\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.796647 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/819051f4-236d-42d3-b3cf-c82103136dce-bound-sa-token\") pod \"819051f4-236d-42d3-b3cf-c82103136dce\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.796673 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/819051f4-236d-42d3-b3cf-c82103136dce-registry-certificates\") pod \"819051f4-236d-42d3-b3cf-c82103136dce\" (UID: \"819051f4-236d-42d3-b3cf-c82103136dce\") " Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.797680 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/819051f4-236d-42d3-b3cf-c82103136dce-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "819051f4-236d-42d3-b3cf-c82103136dce" (UID: "819051f4-236d-42d3-b3cf-c82103136dce"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.797745 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/819051f4-236d-42d3-b3cf-c82103136dce-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "819051f4-236d-42d3-b3cf-c82103136dce" (UID: "819051f4-236d-42d3-b3cf-c82103136dce"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.802244 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/819051f4-236d-42d3-b3cf-c82103136dce-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "819051f4-236d-42d3-b3cf-c82103136dce" (UID: "819051f4-236d-42d3-b3cf-c82103136dce"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.803153 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/819051f4-236d-42d3-b3cf-c82103136dce-kube-api-access-j95cb" (OuterVolumeSpecName: "kube-api-access-j95cb") pod "819051f4-236d-42d3-b3cf-c82103136dce" (UID: "819051f4-236d-42d3-b3cf-c82103136dce"). InnerVolumeSpecName "kube-api-access-j95cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.805208 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/819051f4-236d-42d3-b3cf-c82103136dce-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "819051f4-236d-42d3-b3cf-c82103136dce" (UID: "819051f4-236d-42d3-b3cf-c82103136dce"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.805315 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/819051f4-236d-42d3-b3cf-c82103136dce-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "819051f4-236d-42d3-b3cf-c82103136dce" (UID: "819051f4-236d-42d3-b3cf-c82103136dce"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.808581 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "819051f4-236d-42d3-b3cf-c82103136dce" (UID: "819051f4-236d-42d3-b3cf-c82103136dce"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.814369 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/819051f4-236d-42d3-b3cf-c82103136dce-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "819051f4-236d-42d3-b3cf-c82103136dce" (UID: "819051f4-236d-42d3-b3cf-c82103136dce"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.897987 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j95cb\" (UniqueName: \"kubernetes.io/projected/819051f4-236d-42d3-b3cf-c82103136dce-kube-api-access-j95cb\") on node \"crc\" DevicePath \"\"" Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.898038 4947 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/819051f4-236d-42d3-b3cf-c82103136dce-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.898048 4947 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/819051f4-236d-42d3-b3cf-c82103136dce-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.898060 4947 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/819051f4-236d-42d3-b3cf-c82103136dce-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.898070 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/819051f4-236d-42d3-b3cf-c82103136dce-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.898081 4947 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/819051f4-236d-42d3-b3cf-c82103136dce-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 29 06:42:12 crc kubenswrapper[4947]: I1129 06:42:12.898090 4947 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/819051f4-236d-42d3-b3cf-c82103136dce-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 29 06:42:13 crc kubenswrapper[4947]: I1129 06:42:13.229772 4947 generic.go:334] "Generic (PLEG): container finished" podID="819051f4-236d-42d3-b3cf-c82103136dce" containerID="9fa2f1f31d39edaf36810c348cb6d2c8f08a391489acff1804cdc85a7b24264e" exitCode=0 Nov 29 06:42:13 crc kubenswrapper[4947]: I1129 06:42:13.229838 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" event={"ID":"819051f4-236d-42d3-b3cf-c82103136dce","Type":"ContainerDied","Data":"9fa2f1f31d39edaf36810c348cb6d2c8f08a391489acff1804cdc85a7b24264e"} Nov 29 06:42:13 crc kubenswrapper[4947]: I1129 06:42:13.229877 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" event={"ID":"819051f4-236d-42d3-b3cf-c82103136dce","Type":"ContainerDied","Data":"450be68be89a39c2a7e124829fb8cdeeb4dcb3c23bb75b99843614c39c019ec7"} Nov 29 06:42:13 crc kubenswrapper[4947]: I1129 06:42:13.229897 4947 scope.go:117] "RemoveContainer" 
containerID="9fa2f1f31d39edaf36810c348cb6d2c8f08a391489acff1804cdc85a7b24264e" Nov 29 06:42:13 crc kubenswrapper[4947]: I1129 06:42:13.229935 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cjcrf" Nov 29 06:42:13 crc kubenswrapper[4947]: I1129 06:42:13.250398 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cjcrf"] Nov 29 06:42:13 crc kubenswrapper[4947]: I1129 06:42:13.253887 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cjcrf"] Nov 29 06:42:13 crc kubenswrapper[4947]: I1129 06:42:13.257072 4947 scope.go:117] "RemoveContainer" containerID="9fa2f1f31d39edaf36810c348cb6d2c8f08a391489acff1804cdc85a7b24264e" Nov 29 06:42:13 crc kubenswrapper[4947]: E1129 06:42:13.257573 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fa2f1f31d39edaf36810c348cb6d2c8f08a391489acff1804cdc85a7b24264e\": container with ID starting with 9fa2f1f31d39edaf36810c348cb6d2c8f08a391489acff1804cdc85a7b24264e not found: ID does not exist" containerID="9fa2f1f31d39edaf36810c348cb6d2c8f08a391489acff1804cdc85a7b24264e" Nov 29 06:42:13 crc kubenswrapper[4947]: I1129 06:42:13.257607 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fa2f1f31d39edaf36810c348cb6d2c8f08a391489acff1804cdc85a7b24264e"} err="failed to get container status \"9fa2f1f31d39edaf36810c348cb6d2c8f08a391489acff1804cdc85a7b24264e\": rpc error: code = NotFound desc = could not find container \"9fa2f1f31d39edaf36810c348cb6d2c8f08a391489acff1804cdc85a7b24264e\": container with ID starting with 9fa2f1f31d39edaf36810c348cb6d2c8f08a391489acff1804cdc85a7b24264e not found: ID does not exist" Nov 29 06:42:15 crc kubenswrapper[4947]: I1129 06:42:15.188204 4947 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="819051f4-236d-42d3-b3cf-c82103136dce" path="/var/lib/kubelet/pods/819051f4-236d-42d3-b3cf-c82103136dce/volumes" Nov 29 06:43:12 crc kubenswrapper[4947]: I1129 06:43:12.471655 4947 scope.go:117] "RemoveContainer" containerID="31f5d71eb410e2949ad9f4965932711bd2c39c456909db5a679aa8c8f5bbb0b6" Nov 29 06:43:52 crc kubenswrapper[4947]: I1129 06:43:52.987785 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 06:43:52 crc kubenswrapper[4947]: I1129 06:43:52.988556 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 06:44:12 crc kubenswrapper[4947]: I1129 06:44:12.536003 4947 scope.go:117] "RemoveContainer" containerID="f23b2903b37d6385814ddc74168550bc6d78b9de063a891462687bc2bc1a7d35" Nov 29 06:44:22 crc kubenswrapper[4947]: I1129 06:44:22.988076 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 06:44:22 crc kubenswrapper[4947]: I1129 06:44:22.989312 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Nov 29 06:44:52 crc kubenswrapper[4947]: I1129 06:44:52.987372 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 06:44:52 crc kubenswrapper[4947]: I1129 06:44:52.988085 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 06:44:52 crc kubenswrapper[4947]: I1129 06:44:52.988137 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 06:44:52 crc kubenswrapper[4947]: I1129 06:44:52.988698 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"381c0e9d0a59dd5856ec3a6931be38d490899e1db40040b972c52a0ea9ed0855"} pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 06:44:52 crc kubenswrapper[4947]: I1129 06:44:52.988744 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" containerID="cri-o://381c0e9d0a59dd5856ec3a6931be38d490899e1db40040b972c52a0ea9ed0855" gracePeriod=600 Nov 29 06:44:53 crc kubenswrapper[4947]: I1129 06:44:53.396373 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" 
containerID="381c0e9d0a59dd5856ec3a6931be38d490899e1db40040b972c52a0ea9ed0855" exitCode=0 Nov 29 06:44:53 crc kubenswrapper[4947]: I1129 06:44:53.396411 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerDied","Data":"381c0e9d0a59dd5856ec3a6931be38d490899e1db40040b972c52a0ea9ed0855"} Nov 29 06:44:53 crc kubenswrapper[4947]: I1129 06:44:53.396443 4947 scope.go:117] "RemoveContainer" containerID="6742510082cd58dfd52c8f7fa3778bd9aaaffe372801b3a708a086461b8d5abd" Nov 29 06:44:55 crc kubenswrapper[4947]: I1129 06:44:55.410087 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"e3f38270dbfc41785276b23821b9697dddfbb4108ac42aabfc9b7652679ef1e3"} Nov 29 06:45:00 crc kubenswrapper[4947]: I1129 06:45:00.166383 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406645-8z2x4"] Nov 29 06:45:00 crc kubenswrapper[4947]: E1129 06:45:00.166968 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="819051f4-236d-42d3-b3cf-c82103136dce" containerName="registry" Nov 29 06:45:00 crc kubenswrapper[4947]: I1129 06:45:00.166987 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="819051f4-236d-42d3-b3cf-c82103136dce" containerName="registry" Nov 29 06:45:00 crc kubenswrapper[4947]: I1129 06:45:00.167110 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="819051f4-236d-42d3-b3cf-c82103136dce" containerName="registry" Nov 29 06:45:00 crc kubenswrapper[4947]: I1129 06:45:00.167591 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406645-8z2x4" Nov 29 06:45:00 crc kubenswrapper[4947]: I1129 06:45:00.169274 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 06:45:00 crc kubenswrapper[4947]: I1129 06:45:00.169661 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 06:45:00 crc kubenswrapper[4947]: I1129 06:45:00.177593 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406645-8z2x4"] Nov 29 06:45:00 crc kubenswrapper[4947]: I1129 06:45:00.268864 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwcc8\" (UniqueName: \"kubernetes.io/projected/28fefedc-ca81-4a45-b82d-59283c409bc8-kube-api-access-jwcc8\") pod \"collect-profiles-29406645-8z2x4\" (UID: \"28fefedc-ca81-4a45-b82d-59283c409bc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406645-8z2x4" Nov 29 06:45:00 crc kubenswrapper[4947]: I1129 06:45:00.268926 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28fefedc-ca81-4a45-b82d-59283c409bc8-config-volume\") pod \"collect-profiles-29406645-8z2x4\" (UID: \"28fefedc-ca81-4a45-b82d-59283c409bc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406645-8z2x4" Nov 29 06:45:00 crc kubenswrapper[4947]: I1129 06:45:00.268966 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28fefedc-ca81-4a45-b82d-59283c409bc8-secret-volume\") pod \"collect-profiles-29406645-8z2x4\" (UID: \"28fefedc-ca81-4a45-b82d-59283c409bc8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29406645-8z2x4" Nov 29 06:45:00 crc kubenswrapper[4947]: I1129 06:45:00.370504 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwcc8\" (UniqueName: \"kubernetes.io/projected/28fefedc-ca81-4a45-b82d-59283c409bc8-kube-api-access-jwcc8\") pod \"collect-profiles-29406645-8z2x4\" (UID: \"28fefedc-ca81-4a45-b82d-59283c409bc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406645-8z2x4" Nov 29 06:45:00 crc kubenswrapper[4947]: I1129 06:45:00.371137 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28fefedc-ca81-4a45-b82d-59283c409bc8-config-volume\") pod \"collect-profiles-29406645-8z2x4\" (UID: \"28fefedc-ca81-4a45-b82d-59283c409bc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406645-8z2x4" Nov 29 06:45:00 crc kubenswrapper[4947]: I1129 06:45:00.372596 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28fefedc-ca81-4a45-b82d-59283c409bc8-secret-volume\") pod \"collect-profiles-29406645-8z2x4\" (UID: \"28fefedc-ca81-4a45-b82d-59283c409bc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406645-8z2x4" Nov 29 06:45:00 crc kubenswrapper[4947]: I1129 06:45:00.373303 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28fefedc-ca81-4a45-b82d-59283c409bc8-config-volume\") pod \"collect-profiles-29406645-8z2x4\" (UID: \"28fefedc-ca81-4a45-b82d-59283c409bc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406645-8z2x4" Nov 29 06:45:00 crc kubenswrapper[4947]: I1129 06:45:00.380002 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/28fefedc-ca81-4a45-b82d-59283c409bc8-secret-volume\") pod \"collect-profiles-29406645-8z2x4\" (UID: \"28fefedc-ca81-4a45-b82d-59283c409bc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406645-8z2x4" Nov 29 06:45:00 crc kubenswrapper[4947]: I1129 06:45:00.398181 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwcc8\" (UniqueName: \"kubernetes.io/projected/28fefedc-ca81-4a45-b82d-59283c409bc8-kube-api-access-jwcc8\") pod \"collect-profiles-29406645-8z2x4\" (UID: \"28fefedc-ca81-4a45-b82d-59283c409bc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406645-8z2x4" Nov 29 06:45:00 crc kubenswrapper[4947]: I1129 06:45:00.491689 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406645-8z2x4" Nov 29 06:45:00 crc kubenswrapper[4947]: I1129 06:45:00.688134 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406645-8z2x4"] Nov 29 06:45:01 crc kubenswrapper[4947]: I1129 06:45:01.449067 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406645-8z2x4" event={"ID":"28fefedc-ca81-4a45-b82d-59283c409bc8","Type":"ContainerStarted","Data":"dd813fa78e705664a9f1bb11db47e1fd39a32077145d5f51b0aa8c6f31f904d1"} Nov 29 06:45:03 crc kubenswrapper[4947]: I1129 06:45:03.464115 4947 generic.go:334] "Generic (PLEG): container finished" podID="28fefedc-ca81-4a45-b82d-59283c409bc8" containerID="b11232dc16c361fdc972ac683f79ca7e1d88081cdec6cc576bf8dff38c8d6158" exitCode=0 Nov 29 06:45:03 crc kubenswrapper[4947]: I1129 06:45:03.464202 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406645-8z2x4" 
event={"ID":"28fefedc-ca81-4a45-b82d-59283c409bc8","Type":"ContainerDied","Data":"b11232dc16c361fdc972ac683f79ca7e1d88081cdec6cc576bf8dff38c8d6158"} Nov 29 06:45:04 crc kubenswrapper[4947]: I1129 06:45:04.688552 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406645-8z2x4" Nov 29 06:45:04 crc kubenswrapper[4947]: I1129 06:45:04.841664 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28fefedc-ca81-4a45-b82d-59283c409bc8-secret-volume\") pod \"28fefedc-ca81-4a45-b82d-59283c409bc8\" (UID: \"28fefedc-ca81-4a45-b82d-59283c409bc8\") " Nov 29 06:45:04 crc kubenswrapper[4947]: I1129 06:45:04.841775 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwcc8\" (UniqueName: \"kubernetes.io/projected/28fefedc-ca81-4a45-b82d-59283c409bc8-kube-api-access-jwcc8\") pod \"28fefedc-ca81-4a45-b82d-59283c409bc8\" (UID: \"28fefedc-ca81-4a45-b82d-59283c409bc8\") " Nov 29 06:45:04 crc kubenswrapper[4947]: I1129 06:45:04.842603 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28fefedc-ca81-4a45-b82d-59283c409bc8-config-volume\") pod \"28fefedc-ca81-4a45-b82d-59283c409bc8\" (UID: \"28fefedc-ca81-4a45-b82d-59283c409bc8\") " Nov 29 06:45:04 crc kubenswrapper[4947]: I1129 06:45:04.843299 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28fefedc-ca81-4a45-b82d-59283c409bc8-config-volume" (OuterVolumeSpecName: "config-volume") pod "28fefedc-ca81-4a45-b82d-59283c409bc8" (UID: "28fefedc-ca81-4a45-b82d-59283c409bc8"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:45:04 crc kubenswrapper[4947]: I1129 06:45:04.848322 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28fefedc-ca81-4a45-b82d-59283c409bc8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "28fefedc-ca81-4a45-b82d-59283c409bc8" (UID: "28fefedc-ca81-4a45-b82d-59283c409bc8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:45:04 crc kubenswrapper[4947]: I1129 06:45:04.849453 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28fefedc-ca81-4a45-b82d-59283c409bc8-kube-api-access-jwcc8" (OuterVolumeSpecName: "kube-api-access-jwcc8") pod "28fefedc-ca81-4a45-b82d-59283c409bc8" (UID: "28fefedc-ca81-4a45-b82d-59283c409bc8"). InnerVolumeSpecName "kube-api-access-jwcc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:45:04 crc kubenswrapper[4947]: I1129 06:45:04.944995 4947 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28fefedc-ca81-4a45-b82d-59283c409bc8-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 06:45:04 crc kubenswrapper[4947]: I1129 06:45:04.945072 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwcc8\" (UniqueName: \"kubernetes.io/projected/28fefedc-ca81-4a45-b82d-59283c409bc8-kube-api-access-jwcc8\") on node \"crc\" DevicePath \"\"" Nov 29 06:45:04 crc kubenswrapper[4947]: I1129 06:45:04.945086 4947 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28fefedc-ca81-4a45-b82d-59283c409bc8-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 06:45:05 crc kubenswrapper[4947]: I1129 06:45:05.477906 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406645-8z2x4" 
event={"ID":"28fefedc-ca81-4a45-b82d-59283c409bc8","Type":"ContainerDied","Data":"dd813fa78e705664a9f1bb11db47e1fd39a32077145d5f51b0aa8c6f31f904d1"} Nov 29 06:45:05 crc kubenswrapper[4947]: I1129 06:45:05.477962 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd813fa78e705664a9f1bb11db47e1fd39a32077145d5f51b0aa8c6f31f904d1" Nov 29 06:45:05 crc kubenswrapper[4947]: I1129 06:45:05.477983 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406645-8z2x4" Nov 29 06:45:12 crc kubenswrapper[4947]: I1129 06:45:12.570299 4947 scope.go:117] "RemoveContainer" containerID="d80950f3cbe94e5e3050599144d1cebbe62ec504c014ce28ba34c389c2765f48" Nov 29 06:46:41 crc kubenswrapper[4947]: I1129 06:46:41.626260 4947 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 29 06:46:48 crc kubenswrapper[4947]: I1129 06:46:48.722281 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-qs54j"] Nov 29 06:46:48 crc kubenswrapper[4947]: E1129 06:46:48.723044 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28fefedc-ca81-4a45-b82d-59283c409bc8" containerName="collect-profiles" Nov 29 06:46:48 crc kubenswrapper[4947]: I1129 06:46:48.723057 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="28fefedc-ca81-4a45-b82d-59283c409bc8" containerName="collect-profiles" Nov 29 06:46:48 crc kubenswrapper[4947]: I1129 06:46:48.723152 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="28fefedc-ca81-4a45-b82d-59283c409bc8" containerName="collect-profiles" Nov 29 06:46:48 crc kubenswrapper[4947]: I1129 06:46:48.723555 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-qs54j" Nov 29 06:46:48 crc kubenswrapper[4947]: I1129 06:46:48.725120 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 29 06:46:48 crc kubenswrapper[4947]: I1129 06:46:48.725868 4947 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-qrc4g" Nov 29 06:46:48 crc kubenswrapper[4947]: I1129 06:46:48.726985 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-qs54j"] Nov 29 06:46:48 crc kubenswrapper[4947]: I1129 06:46:48.731651 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 29 06:46:48 crc kubenswrapper[4947]: I1129 06:46:48.733277 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-7lq6x"] Nov 29 06:46:48 crc kubenswrapper[4947]: I1129 06:46:48.733958 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-7lq6x" Nov 29 06:46:48 crc kubenswrapper[4947]: I1129 06:46:48.738894 4947 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-sd9h7" Nov 29 06:46:48 crc kubenswrapper[4947]: I1129 06:46:48.756498 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-gwf74"] Nov 29 06:46:48 crc kubenswrapper[4947]: I1129 06:46:48.758174 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-gwf74" Nov 29 06:46:48 crc kubenswrapper[4947]: I1129 06:46:48.761064 4947 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-kp4xz" Nov 29 06:46:48 crc kubenswrapper[4947]: I1129 06:46:48.762299 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-7lq6x"] Nov 29 06:46:48 crc kubenswrapper[4947]: I1129 06:46:48.797546 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-gwf74"] Nov 29 06:46:48 crc kubenswrapper[4947]: I1129 06:46:48.879604 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhsvm\" (UniqueName: \"kubernetes.io/projected/6a644d1d-1b54-443f-9b88-8540fca13140-kube-api-access-qhsvm\") pod \"cert-manager-5b446d88c5-7lq6x\" (UID: \"6a644d1d-1b54-443f-9b88-8540fca13140\") " pod="cert-manager/cert-manager-5b446d88c5-7lq6x" Nov 29 06:46:48 crc kubenswrapper[4947]: I1129 06:46:48.879656 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4tn8\" (UniqueName: \"kubernetes.io/projected/1a5dba8a-b8b9-48d7-9bd7-cf9873deaaec-kube-api-access-b4tn8\") pod \"cert-manager-webhook-5655c58dd6-gwf74\" (UID: \"1a5dba8a-b8b9-48d7-9bd7-cf9873deaaec\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-gwf74" Nov 29 06:46:48 crc kubenswrapper[4947]: I1129 06:46:48.879728 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfhp8\" (UniqueName: \"kubernetes.io/projected/dd5aa62c-e1b1-4ca2-b931-4599eacd883c-kube-api-access-bfhp8\") pod \"cert-manager-cainjector-7f985d654d-qs54j\" (UID: \"dd5aa62c-e1b1-4ca2-b931-4599eacd883c\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-qs54j" Nov 29 06:46:48 crc kubenswrapper[4947]: I1129 06:46:48.981175 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhsvm\" (UniqueName: \"kubernetes.io/projected/6a644d1d-1b54-443f-9b88-8540fca13140-kube-api-access-qhsvm\") pod \"cert-manager-5b446d88c5-7lq6x\" (UID: \"6a644d1d-1b54-443f-9b88-8540fca13140\") " pod="cert-manager/cert-manager-5b446d88c5-7lq6x" Nov 29 06:46:48 crc kubenswrapper[4947]: I1129 06:46:48.981247 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4tn8\" (UniqueName: \"kubernetes.io/projected/1a5dba8a-b8b9-48d7-9bd7-cf9873deaaec-kube-api-access-b4tn8\") pod \"cert-manager-webhook-5655c58dd6-gwf74\" (UID: \"1a5dba8a-b8b9-48d7-9bd7-cf9873deaaec\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-gwf74" Nov 29 06:46:48 crc kubenswrapper[4947]: I1129 06:46:48.981287 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfhp8\" (UniqueName: \"kubernetes.io/projected/dd5aa62c-e1b1-4ca2-b931-4599eacd883c-kube-api-access-bfhp8\") pod \"cert-manager-cainjector-7f985d654d-qs54j\" (UID: \"dd5aa62c-e1b1-4ca2-b931-4599eacd883c\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-qs54j" Nov 29 06:46:49 crc kubenswrapper[4947]: I1129 06:46:49.004521 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfhp8\" (UniqueName: \"kubernetes.io/projected/dd5aa62c-e1b1-4ca2-b931-4599eacd883c-kube-api-access-bfhp8\") pod \"cert-manager-cainjector-7f985d654d-qs54j\" (UID: \"dd5aa62c-e1b1-4ca2-b931-4599eacd883c\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-qs54j" Nov 29 06:46:49 crc kubenswrapper[4947]: I1129 06:46:49.004990 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhsvm\" (UniqueName: \"kubernetes.io/projected/6a644d1d-1b54-443f-9b88-8540fca13140-kube-api-access-qhsvm\") pod \"cert-manager-5b446d88c5-7lq6x\" (UID: \"6a644d1d-1b54-443f-9b88-8540fca13140\") " 
pod="cert-manager/cert-manager-5b446d88c5-7lq6x" Nov 29 06:46:49 crc kubenswrapper[4947]: I1129 06:46:49.007455 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4tn8\" (UniqueName: \"kubernetes.io/projected/1a5dba8a-b8b9-48d7-9bd7-cf9873deaaec-kube-api-access-b4tn8\") pod \"cert-manager-webhook-5655c58dd6-gwf74\" (UID: \"1a5dba8a-b8b9-48d7-9bd7-cf9873deaaec\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-gwf74" Nov 29 06:46:49 crc kubenswrapper[4947]: I1129 06:46:49.051574 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-qs54j" Nov 29 06:46:49 crc kubenswrapper[4947]: I1129 06:46:49.062483 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-7lq6x" Nov 29 06:46:49 crc kubenswrapper[4947]: I1129 06:46:49.079712 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-gwf74" Nov 29 06:46:49 crc kubenswrapper[4947]: I1129 06:46:49.251738 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-qs54j"] Nov 29 06:46:49 crc kubenswrapper[4947]: I1129 06:46:49.259586 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 06:46:49 crc kubenswrapper[4947]: I1129 06:46:49.301559 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-7lq6x"] Nov 29 06:46:49 crc kubenswrapper[4947]: W1129 06:46:49.307840 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a644d1d_1b54_443f_9b88_8540fca13140.slice/crio-bcbd902ddaa56a08c170fb21570eb32f53ac3e16a844186875c8fc54b7d497ee WatchSource:0}: Error finding container bcbd902ddaa56a08c170fb21570eb32f53ac3e16a844186875c8fc54b7d497ee: Status 404 returned 
error can't find the container with id bcbd902ddaa56a08c170fb21570eb32f53ac3e16a844186875c8fc54b7d497ee Nov 29 06:46:49 crc kubenswrapper[4947]: I1129 06:46:49.339975 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-gwf74"] Nov 29 06:46:49 crc kubenswrapper[4947]: W1129 06:46:49.341553 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a5dba8a_b8b9_48d7_9bd7_cf9873deaaec.slice/crio-794ee1a972af52b36483ad3e4dd33f2c4cb33c8847000f467b4de4d491a1a8e0 WatchSource:0}: Error finding container 794ee1a972af52b36483ad3e4dd33f2c4cb33c8847000f467b4de4d491a1a8e0: Status 404 returned error can't find the container with id 794ee1a972af52b36483ad3e4dd33f2c4cb33c8847000f467b4de4d491a1a8e0 Nov 29 06:46:50 crc kubenswrapper[4947]: I1129 06:46:50.167967 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-gwf74" event={"ID":"1a5dba8a-b8b9-48d7-9bd7-cf9873deaaec","Type":"ContainerStarted","Data":"794ee1a972af52b36483ad3e4dd33f2c4cb33c8847000f467b4de4d491a1a8e0"} Nov 29 06:46:50 crc kubenswrapper[4947]: I1129 06:46:50.169039 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-qs54j" event={"ID":"dd5aa62c-e1b1-4ca2-b931-4599eacd883c","Type":"ContainerStarted","Data":"a555b9b1c3261a066dbd5c52170c3faaab3ce69a00be4f47a538d78e75446ead"} Nov 29 06:46:50 crc kubenswrapper[4947]: I1129 06:46:50.171111 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-7lq6x" event={"ID":"6a644d1d-1b54-443f-9b88-8540fca13140","Type":"ContainerStarted","Data":"bcbd902ddaa56a08c170fb21570eb32f53ac3e16a844186875c8fc54b7d497ee"} Nov 29 06:46:59 crc kubenswrapper[4947]: I1129 06:46:59.220590 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z4rxq"] Nov 29 06:46:59 crc 
kubenswrapper[4947]: I1129 06:46:59.221531 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovn-controller" containerID="cri-o://9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836" gracePeriod=30 Nov 29 06:46:59 crc kubenswrapper[4947]: I1129 06:46:59.221595 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="northd" containerID="cri-o://03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e" gracePeriod=30 Nov 29 06:46:59 crc kubenswrapper[4947]: I1129 06:46:59.221663 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2" gracePeriod=30 Nov 29 06:46:59 crc kubenswrapper[4947]: I1129 06:46:59.221708 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="kube-rbac-proxy-node" containerID="cri-o://26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c" gracePeriod=30 Nov 29 06:46:59 crc kubenswrapper[4947]: I1129 06:46:59.221745 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovn-acl-logging" containerID="cri-o://ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c" gracePeriod=30 Nov 29 06:46:59 crc kubenswrapper[4947]: I1129 06:46:59.221778 4947 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="sbdb" containerID="cri-o://fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797" gracePeriod=30 Nov 29 06:46:59 crc kubenswrapper[4947]: I1129 06:46:59.221792 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="nbdb" containerID="cri-o://b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415" gracePeriod=30 Nov 29 06:46:59 crc kubenswrapper[4947]: I1129 06:46:59.268069 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovnkube-controller" containerID="cri-o://5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42" gracePeriod=30 Nov 29 06:46:59 crc kubenswrapper[4947]: I1129 06:46:59.961419 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z4rxq_dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/ovnkube-controller/3.log" Nov 29 06:46:59 crc kubenswrapper[4947]: I1129 06:46:59.965951 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z4rxq_dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/ovn-acl-logging/0.log" Nov 29 06:46:59 crc kubenswrapper[4947]: I1129 06:46:59.966634 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z4rxq_dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/ovn-controller/0.log" Nov 29 06:46:59 crc kubenswrapper[4947]: I1129 06:46:59.967696 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033006 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-69kjm"] Nov 29 06:47:00 crc kubenswrapper[4947]: E1129 06:47:00.033274 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="kube-rbac-proxy-ovn-metrics" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033289 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="kube-rbac-proxy-ovn-metrics" Nov 29 06:47:00 crc kubenswrapper[4947]: E1129 06:47:00.033301 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="kubecfg-setup" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033309 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="kubecfg-setup" Nov 29 06:47:00 crc kubenswrapper[4947]: E1129 06:47:00.033325 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovnkube-controller" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033332 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovnkube-controller" Nov 29 06:47:00 crc kubenswrapper[4947]: E1129 06:47:00.033340 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="nbdb" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033346 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="nbdb" Nov 29 06:47:00 crc kubenswrapper[4947]: E1129 06:47:00.033352 4947 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="sbdb" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033358 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="sbdb" Nov 29 06:47:00 crc kubenswrapper[4947]: E1129 06:47:00.033539 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovn-controller" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033546 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovn-controller" Nov 29 06:47:00 crc kubenswrapper[4947]: E1129 06:47:00.033556 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovnkube-controller" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033562 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovnkube-controller" Nov 29 06:47:00 crc kubenswrapper[4947]: E1129 06:47:00.033570 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovnkube-controller" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033575 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovnkube-controller" Nov 29 06:47:00 crc kubenswrapper[4947]: E1129 06:47:00.033586 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="northd" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033593 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="northd" Nov 29 06:47:00 crc kubenswrapper[4947]: E1129 06:47:00.033601 4947 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovnkube-controller" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033607 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovnkube-controller" Nov 29 06:47:00 crc kubenswrapper[4947]: E1129 06:47:00.033620 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="kube-rbac-proxy-node" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033626 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="kube-rbac-proxy-node" Nov 29 06:47:00 crc kubenswrapper[4947]: E1129 06:47:00.033632 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovn-acl-logging" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033638 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovn-acl-logging" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033730 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovnkube-controller" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033739 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovn-controller" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033745 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="northd" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033751 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="sbdb" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033759 4947 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="nbdb" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033769 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovnkube-controller" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033778 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovnkube-controller" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033785 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="kube-rbac-proxy-node" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033793 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovn-acl-logging" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033800 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovnkube-controller" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033806 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="kube-rbac-proxy-ovn-metrics" Nov 29 06:47:00 crc kubenswrapper[4947]: E1129 06:47:00.033896 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovnkube-controller" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033903 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovnkube-controller" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.033986 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerName="ovnkube-controller" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.035526 4947 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.065095 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-cni-netd\") pod \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.065193 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-kubelet\") pod \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.065256 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-ovnkube-script-lib\") pod \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.065297 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-node-log\") pod \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.065321 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" (UID: "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.065343 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-ovnkube-config\") pod \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.065380 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" (UID: "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.065465 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-run-ovn-kubernetes\") pod \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.065524 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-cni-bin\") pod \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.065636 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-run-ovn\") pod \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.065854 4947 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-node-log" (OuterVolumeSpecName: "node-log") pod "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" (UID: "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.065935 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" (UID: "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.065944 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-node-log\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.065967 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" (UID: "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.065991 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" (UID: "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0"). 
InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066026 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wjpx\" (UniqueName: \"kubernetes.io/projected/8702f77b-770f-4eac-a154-c2de370c22c2-kube-api-access-2wjpx\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066123 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" (UID: "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066154 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" (UID: "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066180 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8702f77b-770f-4eac-a154-c2de370c22c2-ovnkube-config\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066275 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-host-run-ovn-kubernetes\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066317 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-host-kubelet\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066357 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-log-socket\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066392 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8702f77b-770f-4eac-a154-c2de370c22c2-ovnkube-script-lib\") pod \"ovnkube-node-69kjm\" (UID: 
\"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066421 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-host-slash\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066447 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-run-ovn\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066512 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-run-systemd\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066562 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-host-cni-bin\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066586 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8702f77b-770f-4eac-a154-c2de370c22c2-ovn-node-metrics-cert\") pod \"ovnkube-node-69kjm\" (UID: 
\"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066621 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-run-openvswitch\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066640 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066659 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-etc-openvswitch\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066679 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-host-run-netns\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066728 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-systemd-units\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066749 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-var-lib-openvswitch\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066767 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8702f77b-770f-4eac-a154-c2de370c22c2-env-overrides\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066807 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-host-cni-netd\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066912 4947 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066941 4947 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-node-log\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:00 crc kubenswrapper[4947]: 
I1129 06:47:00.066954 4947 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066968 4947 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066982 4947 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.066995 4947 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.067007 4947 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.067020 4947 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.167509 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-systemd-units\") pod \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.167582 4947 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grbf6\" (UniqueName: \"kubernetes.io/projected/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-kube-api-access-grbf6\") pod \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.167628 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-run-systemd\") pod \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.167676 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-etc-openvswitch\") pod \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.167667 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" (UID: "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.167706 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-run-netns\") pod \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.167748 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-run-openvswitch\") pod \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.167782 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-env-overrides\") pod \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.167784 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" (UID: "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.167811 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-log-socket\") pod \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.167830 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" (UID: "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.167839 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-slash\") pod \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.167870 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-var-lib-openvswitch\") pod \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.167865 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" (UID: "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.167912 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.167949 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-ovn-node-metrics-cert\") pod \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\" (UID: \"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0\") " Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.167909 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-log-socket" (OuterVolumeSpecName: "log-socket") pod "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" (UID: "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.167967 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" (UID: "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.167935 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-slash" (OuterVolumeSpecName: "host-slash") pod "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" (UID: "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.167946 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" (UID: "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168082 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-systemd-units\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168123 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-var-lib-openvswitch\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168154 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8702f77b-770f-4eac-a154-c2de370c22c2-env-overrides\") pod 
\"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168195 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-host-cni-netd\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168266 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-var-lib-openvswitch\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168286 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-node-log\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168325 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wjpx\" (UniqueName: \"kubernetes.io/projected/8702f77b-770f-4eac-a154-c2de370c22c2-kube-api-access-2wjpx\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168342 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" (UID: 
"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168359 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-host-cni-netd\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168370 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8702f77b-770f-4eac-a154-c2de370c22c2-ovnkube-config\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168409 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-host-run-ovn-kubernetes\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168452 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-host-kubelet\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168494 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-log-socket\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168541 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8702f77b-770f-4eac-a154-c2de370c22c2-ovnkube-script-lib\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168589 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-host-slash\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168622 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-run-ovn\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168660 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-run-systemd\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168696 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8702f77b-770f-4eac-a154-c2de370c22c2-ovn-node-metrics-cert\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc 
kubenswrapper[4947]: I1129 06:47:00.168726 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-host-cni-bin\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168759 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-run-openvswitch\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168789 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168820 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-etc-openvswitch\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168856 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-host-run-netns\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 
06:47:00.168900 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8702f77b-770f-4eac-a154-c2de370c22c2-env-overrides\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168929 4947 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168953 4947 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168962 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-host-run-ovn-kubernetes\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168984 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-run-ovn\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.169018 4947 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.169019 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-run-systemd\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.169088 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.169133 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-etc-openvswitch\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.169175 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-host-run-netns\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.169204 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-host-kubelet\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.169260 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-log-socket\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.169286 4947 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.169365 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8702f77b-770f-4eac-a154-c2de370c22c2-ovnkube-config\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.168327 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-systemd-units\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.169455 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-node-log\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.169506 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-host-cni-bin\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc 
kubenswrapper[4947]: I1129 06:47:00.169535 4947 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.169577 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-run-openvswitch\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.169603 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8702f77b-770f-4eac-a154-c2de370c22c2-host-slash\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.169626 4947 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-log-socket\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.169646 4947 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-slash\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.169664 4947 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.169687 4947 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.170484 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8702f77b-770f-4eac-a154-c2de370c22c2-ovnkube-script-lib\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.173172 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" (UID: "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.173421 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8702f77b-770f-4eac-a154-c2de370c22c2-ovn-node-metrics-cert\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.174282 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-kube-api-access-grbf6" (OuterVolumeSpecName: "kube-api-access-grbf6") pod "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" (UID: "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0"). InnerVolumeSpecName "kube-api-access-grbf6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.196955 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" (UID: "dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.200980 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wjpx\" (UniqueName: \"kubernetes.io/projected/8702f77b-770f-4eac-a154-c2de370c22c2-kube-api-access-2wjpx\") pod \"ovnkube-node-69kjm\" (UID: \"8702f77b-770f-4eac-a154-c2de370c22c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.236904 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xlg45_2cbb3532-a15b-4cca-bde1-aa1ae20698f1/kube-multus/2.log" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.237343 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xlg45_2cbb3532-a15b-4cca-bde1-aa1ae20698f1/kube-multus/1.log" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.237382 4947 generic.go:334] "Generic (PLEG): container finished" podID="2cbb3532-a15b-4cca-bde1-aa1ae20698f1" containerID="835a800714641bae786d619e7b11ef925de7bab3829365dde0a3e2934199065e" exitCode=2 Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.237437 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xlg45" event={"ID":"2cbb3532-a15b-4cca-bde1-aa1ae20698f1","Type":"ContainerDied","Data":"835a800714641bae786d619e7b11ef925de7bab3829365dde0a3e2934199065e"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.237472 4947 scope.go:117] "RemoveContainer" 
containerID="63ddd0da1118c2e86da1aea51f8248927f80bc1d21790723952b4d59f294cd76" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.238007 4947 scope.go:117] "RemoveContainer" containerID="835a800714641bae786d619e7b11ef925de7bab3829365dde0a3e2934199065e" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.240180 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z4rxq_dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/ovnkube-controller/3.log" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.243530 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z4rxq_dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/ovn-acl-logging/0.log" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.247038 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z4rxq_dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/ovn-controller/0.log" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.247685 4947 generic.go:334] "Generic (PLEG): container finished" podID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerID="5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42" exitCode=0 Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.247767 4947 generic.go:334] "Generic (PLEG): container finished" podID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerID="fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797" exitCode=0 Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.247784 4947 generic.go:334] "Generic (PLEG): container finished" podID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerID="b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415" exitCode=0 Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.247799 4947 generic.go:334] "Generic (PLEG): container finished" podID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerID="03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e" exitCode=0 Nov 29 06:47:00 crc 
kubenswrapper[4947]: I1129 06:47:00.247812 4947 generic.go:334] "Generic (PLEG): container finished" podID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerID="ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2" exitCode=0 Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.247826 4947 generic.go:334] "Generic (PLEG): container finished" podID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerID="26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c" exitCode=0 Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.247839 4947 generic.go:334] "Generic (PLEG): container finished" podID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerID="ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c" exitCode=143 Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.247853 4947 generic.go:334] "Generic (PLEG): container finished" podID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" containerID="9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836" exitCode=143 Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.247887 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerDied","Data":"5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.247929 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerDied","Data":"fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.247954 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerDied","Data":"b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415"} Nov 29 06:47:00 crc kubenswrapper[4947]: 
I1129 06:47:00.247975 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerDied","Data":"03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.247995 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerDied","Data":"ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248018 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerDied","Data":"26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248038 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248056 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248069 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248081 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248092 4947 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248103 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248115 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248126 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248139 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248169 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248194 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerDied","Data":"ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248214 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42"} Nov 29 
06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248275 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248288 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248299 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248312 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248325 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248336 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248347 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248358 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836"} Nov 29 
06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248369 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248398 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerDied","Data":"9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248416 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248430 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248451 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248462 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248474 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248484 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248496 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248507 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248519 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248530 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248548 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" event={"ID":"dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0","Type":"ContainerDied","Data":"8d5ea209885265916fa06511390b1479de47f6dd6633c4c2ae8ff7c6685d1d4d"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248565 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248578 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248589 4947 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248600 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248611 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248622 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248633 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248644 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248667 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248687 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1"} Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.248818 4947 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z4rxq" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.270782 4947 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.270825 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grbf6\" (UniqueName: \"kubernetes.io/projected/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-kube-api-access-grbf6\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.270837 4947 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.301276 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z4rxq"] Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.312452 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z4rxq"] Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.319566 4947 scope.go:117] "RemoveContainer" containerID="5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.336771 4947 scope.go:117] "RemoveContainer" containerID="f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.360391 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:00 crc kubenswrapper[4947]: I1129 06:47:00.509533 4947 scope.go:117] "RemoveContainer" containerID="fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797" Nov 29 06:47:01 crc kubenswrapper[4947]: I1129 06:47:01.027182 4947 scope.go:117] "RemoveContainer" containerID="b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415" Nov 29 06:47:01 crc kubenswrapper[4947]: I1129 06:47:01.186480 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0" path="/var/lib/kubelet/pods/dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/volumes" Nov 29 06:47:01 crc kubenswrapper[4947]: I1129 06:47:01.255565 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xlg45_2cbb3532-a15b-4cca-bde1-aa1ae20698f1/kube-multus/2.log" Nov 29 06:47:01 crc kubenswrapper[4947]: I1129 06:47:01.255664 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xlg45" event={"ID":"2cbb3532-a15b-4cca-bde1-aa1ae20698f1","Type":"ContainerStarted","Data":"02ca4fc565722daf46040da0ea34ca29b3d01d13f852f7016ba5a4414a4062e7"} Nov 29 06:47:01 crc kubenswrapper[4947]: I1129 06:47:01.261585 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z4rxq_dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/ovn-acl-logging/0.log" Nov 29 06:47:01 crc kubenswrapper[4947]: I1129 06:47:01.262123 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z4rxq_dcbb4a27-57b5-4ac3-b69b-4f644d6f1be0/ovn-controller/0.log" Nov 29 06:47:01 crc kubenswrapper[4947]: I1129 06:47:01.915820 4947 scope.go:117] "RemoveContainer" containerID="03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e" Nov 29 06:47:01 crc kubenswrapper[4947]: W1129 06:47:01.919643 4947 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8702f77b_770f_4eac_a154_c2de370c22c2.slice/crio-4b9286c2f9390393a7390b64ee3e2aa2090adaaf50d84d3cd38e4f1a73c54fd4 WatchSource:0}: Error finding container 4b9286c2f9390393a7390b64ee3e2aa2090adaaf50d84d3cd38e4f1a73c54fd4: Status 404 returned error can't find the container with id 4b9286c2f9390393a7390b64ee3e2aa2090adaaf50d84d3cd38e4f1a73c54fd4 Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.031465 4947 scope.go:117] "RemoveContainer" containerID="ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.089483 4947 scope.go:117] "RemoveContainer" containerID="26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.107135 4947 scope.go:117] "RemoveContainer" containerID="ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.160970 4947 scope.go:117] "RemoveContainer" containerID="9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.182681 4947 scope.go:117] "RemoveContainer" containerID="965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.208694 4947 scope.go:117] "RemoveContainer" containerID="5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42" Nov 29 06:47:02 crc kubenswrapper[4947]: E1129 06:47:02.209241 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42\": container with ID starting with 5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42 not found: ID does not exist" containerID="5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 
06:47:02.209278 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42"} err="failed to get container status \"5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42\": rpc error: code = NotFound desc = could not find container \"5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42\": container with ID starting with 5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.209309 4947 scope.go:117] "RemoveContainer" containerID="f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920" Nov 29 06:47:02 crc kubenswrapper[4947]: E1129 06:47:02.209928 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920\": container with ID starting with f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920 not found: ID does not exist" containerID="f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.210120 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920"} err="failed to get container status \"f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920\": rpc error: code = NotFound desc = could not find container \"f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920\": container with ID starting with f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.210170 4947 scope.go:117] "RemoveContainer" containerID="fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797" Nov 29 06:47:02 crc 
kubenswrapper[4947]: E1129 06:47:02.210539 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\": container with ID starting with fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797 not found: ID does not exist" containerID="fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.210564 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797"} err="failed to get container status \"fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\": rpc error: code = NotFound desc = could not find container \"fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\": container with ID starting with fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.210579 4947 scope.go:117] "RemoveContainer" containerID="b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415" Nov 29 06:47:02 crc kubenswrapper[4947]: E1129 06:47:02.210977 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\": container with ID starting with b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415 not found: ID does not exist" containerID="b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.211005 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415"} err="failed to get container status 
\"b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\": rpc error: code = NotFound desc = could not find container \"b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\": container with ID starting with b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.211017 4947 scope.go:117] "RemoveContainer" containerID="03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e" Nov 29 06:47:02 crc kubenswrapper[4947]: E1129 06:47:02.211467 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\": container with ID starting with 03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e not found: ID does not exist" containerID="03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.211500 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e"} err="failed to get container status \"03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\": rpc error: code = NotFound desc = could not find container \"03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\": container with ID starting with 03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.211522 4947 scope.go:117] "RemoveContainer" containerID="ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2" Nov 29 06:47:02 crc kubenswrapper[4947]: E1129 06:47:02.211925 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\": container with ID starting with ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2 not found: ID does not exist" containerID="ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.211953 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2"} err="failed to get container status \"ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\": rpc error: code = NotFound desc = could not find container \"ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\": container with ID starting with ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.211970 4947 scope.go:117] "RemoveContainer" containerID="26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c" Nov 29 06:47:02 crc kubenswrapper[4947]: E1129 06:47:02.212209 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\": container with ID starting with 26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c not found: ID does not exist" containerID="26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.212266 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c"} err="failed to get container status \"26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\": rpc error: code = NotFound desc = could not find container \"26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\": container with ID 
starting with 26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.212287 4947 scope.go:117] "RemoveContainer" containerID="ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c" Nov 29 06:47:02 crc kubenswrapper[4947]: E1129 06:47:02.212628 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\": container with ID starting with ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c not found: ID does not exist" containerID="ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.212653 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c"} err="failed to get container status \"ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\": rpc error: code = NotFound desc = could not find container \"ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\": container with ID starting with ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.212669 4947 scope.go:117] "RemoveContainer" containerID="9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836" Nov 29 06:47:02 crc kubenswrapper[4947]: E1129 06:47:02.212958 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\": container with ID starting with 9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836 not found: ID does not exist" containerID="9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836" Nov 29 
06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.212990 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836"} err="failed to get container status \"9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\": rpc error: code = NotFound desc = could not find container \"9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\": container with ID starting with 9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.213011 4947 scope.go:117] "RemoveContainer" containerID="965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1" Nov 29 06:47:02 crc kubenswrapper[4947]: E1129 06:47:02.213363 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\": container with ID starting with 965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1 not found: ID does not exist" containerID="965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.213385 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1"} err="failed to get container status \"965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\": rpc error: code = NotFound desc = could not find container \"965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\": container with ID starting with 965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.213399 4947 scope.go:117] "RemoveContainer" 
containerID="5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.213674 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42"} err="failed to get container status \"5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42\": rpc error: code = NotFound desc = could not find container \"5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42\": container with ID starting with 5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.213695 4947 scope.go:117] "RemoveContainer" containerID="f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.214005 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920"} err="failed to get container status \"f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920\": rpc error: code = NotFound desc = could not find container \"f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920\": container with ID starting with f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.214065 4947 scope.go:117] "RemoveContainer" containerID="fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.214496 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797"} err="failed to get container status \"fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\": rpc error: code = NotFound desc = could 
not find container \"fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\": container with ID starting with fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.214523 4947 scope.go:117] "RemoveContainer" containerID="b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.214871 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415"} err="failed to get container status \"b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\": rpc error: code = NotFound desc = could not find container \"b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\": container with ID starting with b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.214910 4947 scope.go:117] "RemoveContainer" containerID="03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.215310 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e"} err="failed to get container status \"03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\": rpc error: code = NotFound desc = could not find container \"03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\": container with ID starting with 03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.215335 4947 scope.go:117] "RemoveContainer" containerID="ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 
06:47:02.215580 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2"} err="failed to get container status \"ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\": rpc error: code = NotFound desc = could not find container \"ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\": container with ID starting with ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.215606 4947 scope.go:117] "RemoveContainer" containerID="26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.215893 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c"} err="failed to get container status \"26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\": rpc error: code = NotFound desc = could not find container \"26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\": container with ID starting with 26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.215917 4947 scope.go:117] "RemoveContainer" containerID="ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.216180 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c"} err="failed to get container status \"ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\": rpc error: code = NotFound desc = could not find container \"ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\": container with ID starting with 
ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.216205 4947 scope.go:117] "RemoveContainer" containerID="9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.216579 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836"} err="failed to get container status \"9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\": rpc error: code = NotFound desc = could not find container \"9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\": container with ID starting with 9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.216604 4947 scope.go:117] "RemoveContainer" containerID="965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.216878 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1"} err="failed to get container status \"965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\": rpc error: code = NotFound desc = could not find container \"965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\": container with ID starting with 965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.216903 4947 scope.go:117] "RemoveContainer" containerID="5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.217186 4947 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42"} err="failed to get container status \"5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42\": rpc error: code = NotFound desc = could not find container \"5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42\": container with ID starting with 5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.217214 4947 scope.go:117] "RemoveContainer" containerID="f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.217639 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920"} err="failed to get container status \"f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920\": rpc error: code = NotFound desc = could not find container \"f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920\": container with ID starting with f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.217665 4947 scope.go:117] "RemoveContainer" containerID="fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.218051 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797"} err="failed to get container status \"fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\": rpc error: code = NotFound desc = could not find container \"fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\": container with ID starting with fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797 not found: ID does not 
exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.218101 4947 scope.go:117] "RemoveContainer" containerID="b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.218505 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415"} err="failed to get container status \"b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\": rpc error: code = NotFound desc = could not find container \"b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\": container with ID starting with b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.218544 4947 scope.go:117] "RemoveContainer" containerID="03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.219501 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e"} err="failed to get container status \"03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\": rpc error: code = NotFound desc = could not find container \"03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\": container with ID starting with 03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.219533 4947 scope.go:117] "RemoveContainer" containerID="ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.219943 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2"} err="failed to get container status 
\"ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\": rpc error: code = NotFound desc = could not find container \"ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\": container with ID starting with ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.219962 4947 scope.go:117] "RemoveContainer" containerID="26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.220351 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c"} err="failed to get container status \"26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\": rpc error: code = NotFound desc = could not find container \"26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\": container with ID starting with 26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.220382 4947 scope.go:117] "RemoveContainer" containerID="ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.220652 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c"} err="failed to get container status \"ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\": rpc error: code = NotFound desc = could not find container \"ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\": container with ID starting with ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.220675 4947 scope.go:117] "RemoveContainer" 
containerID="9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.221015 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836"} err="failed to get container status \"9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\": rpc error: code = NotFound desc = could not find container \"9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\": container with ID starting with 9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.221045 4947 scope.go:117] "RemoveContainer" containerID="965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.221434 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1"} err="failed to get container status \"965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\": rpc error: code = NotFound desc = could not find container \"965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\": container with ID starting with 965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.221461 4947 scope.go:117] "RemoveContainer" containerID="5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.221797 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42"} err="failed to get container status \"5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42\": rpc error: code = NotFound desc = could 
not find container \"5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42\": container with ID starting with 5567c06c44d06a87e61444e9e77b7096c805189d10806a57b4c6cfd436cbce42 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.221821 4947 scope.go:117] "RemoveContainer" containerID="f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.222147 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920"} err="failed to get container status \"f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920\": rpc error: code = NotFound desc = could not find container \"f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920\": container with ID starting with f688195e20b1f49c08f3c1b7e2348e4ffdaa7417f17c1a626fde1c81232d1920 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.222188 4947 scope.go:117] "RemoveContainer" containerID="fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.222532 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797"} err="failed to get container status \"fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\": rpc error: code = NotFound desc = could not find container \"fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797\": container with ID starting with fadf23d78dff09f3edcb6159171e1f55f5a84b42d3dbc6804a2e1764d6c71797 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.222560 4947 scope.go:117] "RemoveContainer" containerID="b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 
06:47:02.223584 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415"} err="failed to get container status \"b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\": rpc error: code = NotFound desc = could not find container \"b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415\": container with ID starting with b624f494cbf18c5d0c15ee7d074273ebb685465bc73aeebae77758840c958415 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.223615 4947 scope.go:117] "RemoveContainer" containerID="03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.223961 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e"} err="failed to get container status \"03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\": rpc error: code = NotFound desc = could not find container \"03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e\": container with ID starting with 03aa7fc02c19732bcc2f291ae3e1eb8ee53ac2f8e4b6138a24c73eb69201fa5e not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.223987 4947 scope.go:117] "RemoveContainer" containerID="ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.224272 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2"} err="failed to get container status \"ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\": rpc error: code = NotFound desc = could not find container \"ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2\": container with ID starting with 
ffeb939b1928f4e71530473d93e5b142eb70a09d929bee5bea36c407c0fb44a2 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.224307 4947 scope.go:117] "RemoveContainer" containerID="26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.224548 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c"} err="failed to get container status \"26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\": rpc error: code = NotFound desc = could not find container \"26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c\": container with ID starting with 26f82f9be693d04a2b2536773e664c7e2525ea3cf90f6135e5ed8a41e0def93c not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.224566 4947 scope.go:117] "RemoveContainer" containerID="ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.224772 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c"} err="failed to get container status \"ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\": rpc error: code = NotFound desc = could not find container \"ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c\": container with ID starting with ba937f20fd42fdc9c935084319c2d3c455813e35002ed22a4be631cc74fab26c not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.224801 4947 scope.go:117] "RemoveContainer" containerID="9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.225864 4947 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836"} err="failed to get container status \"9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\": rpc error: code = NotFound desc = could not find container \"9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836\": container with ID starting with 9e5372e6932fe776f4e4d18124f45a05be3f374919ce6df4426d89a023353836 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.225899 4947 scope.go:117] "RemoveContainer" containerID="965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.226347 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1"} err="failed to get container status \"965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\": rpc error: code = NotFound desc = could not find container \"965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1\": container with ID starting with 965374cc19d92fa4bf2de4392f05033083478852dfaf4851c66d2bbefd190be1 not found: ID does not exist" Nov 29 06:47:02 crc kubenswrapper[4947]: I1129 06:47:02.268856 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" event={"ID":"8702f77b-770f-4eac-a154-c2de370c22c2","Type":"ContainerStarted","Data":"4b9286c2f9390393a7390b64ee3e2aa2090adaaf50d84d3cd38e4f1a73c54fd4"} Nov 29 06:47:03 crc kubenswrapper[4947]: I1129 06:47:03.278360 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-gwf74" event={"ID":"1a5dba8a-b8b9-48d7-9bd7-cf9873deaaec","Type":"ContainerStarted","Data":"49a7c0981b48a2ffcd7c0213f3ec0fcc76df58e2ecea3b94c388e702aade4fbd"} Nov 29 06:47:03 crc kubenswrapper[4947]: I1129 06:47:03.278495 4947 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-gwf74" Nov 29 06:47:03 crc kubenswrapper[4947]: I1129 06:47:03.279741 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-qs54j" event={"ID":"dd5aa62c-e1b1-4ca2-b931-4599eacd883c","Type":"ContainerStarted","Data":"18dc6624ab1308b35b47cd7dc16b69fa7b6fcd669abaaceb0db5d4e9ba6196b6"} Nov 29 06:47:03 crc kubenswrapper[4947]: I1129 06:47:03.281601 4947 generic.go:334] "Generic (PLEG): container finished" podID="8702f77b-770f-4eac-a154-c2de370c22c2" containerID="3902237e5f0204663daf039cd644a48a894f079e2add805d0beb8f45000f827b" exitCode=0 Nov 29 06:47:03 crc kubenswrapper[4947]: I1129 06:47:03.281705 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" event={"ID":"8702f77b-770f-4eac-a154-c2de370c22c2","Type":"ContainerDied","Data":"3902237e5f0204663daf039cd644a48a894f079e2add805d0beb8f45000f827b"} Nov 29 06:47:03 crc kubenswrapper[4947]: I1129 06:47:03.283706 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-7lq6x" event={"ID":"6a644d1d-1b54-443f-9b88-8540fca13140","Type":"ContainerStarted","Data":"0d7669f9ab1b0fc795e7d2dc7084f6460258a6eb46d75a11bf686b2949e60b5f"} Nov 29 06:47:03 crc kubenswrapper[4947]: I1129 06:47:03.300519 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-gwf74" podStartSLOduration=2.631455263 podStartE2EDuration="15.300496396s" podCreationTimestamp="2025-11-29 06:46:48 +0000 UTC" firstStartedPulling="2025-11-29 06:46:49.344314963 +0000 UTC m=+760.388697044" lastFinishedPulling="2025-11-29 06:47:02.013356096 +0000 UTC m=+773.057738177" observedRunningTime="2025-11-29 06:47:03.295374619 +0000 UTC m=+774.339756700" watchObservedRunningTime="2025-11-29 06:47:03.300496396 +0000 UTC m=+774.344878477" Nov 29 06:47:03 crc kubenswrapper[4947]: I1129 
06:47:03.316929 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-qs54j" podStartSLOduration=2.486559211 podStartE2EDuration="15.316909882s" podCreationTimestamp="2025-11-29 06:46:48 +0000 UTC" firstStartedPulling="2025-11-29 06:46:49.259154508 +0000 UTC m=+760.303536599" lastFinishedPulling="2025-11-29 06:47:02.089505189 +0000 UTC m=+773.133887270" observedRunningTime="2025-11-29 06:47:03.311748714 +0000 UTC m=+774.356130795" watchObservedRunningTime="2025-11-29 06:47:03.316909882 +0000 UTC m=+774.361291963" Nov 29 06:47:03 crc kubenswrapper[4947]: I1129 06:47:03.332403 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-7lq6x" podStartSLOduration=2.483636559 podStartE2EDuration="15.332379214s" podCreationTimestamp="2025-11-29 06:46:48 +0000 UTC" firstStartedPulling="2025-11-29 06:46:49.312986409 +0000 UTC m=+760.357368500" lastFinishedPulling="2025-11-29 06:47:02.161729074 +0000 UTC m=+773.206111155" observedRunningTime="2025-11-29 06:47:03.328196321 +0000 UTC m=+774.372578422" watchObservedRunningTime="2025-11-29 06:47:03.332379214 +0000 UTC m=+774.376761295" Nov 29 06:47:04 crc kubenswrapper[4947]: I1129 06:47:04.291918 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" event={"ID":"8702f77b-770f-4eac-a154-c2de370c22c2","Type":"ContainerStarted","Data":"4897094b4356aa01fde4f055155e172b50b97653ebea9fe9228ba0ea569b08a5"} Nov 29 06:47:04 crc kubenswrapper[4947]: I1129 06:47:04.291970 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" event={"ID":"8702f77b-770f-4eac-a154-c2de370c22c2","Type":"ContainerStarted","Data":"6f6b8aa5b7a572ee51c177c56d6d13c8c76f98106716c4eabdb2f511d4ccfacb"} Nov 29 06:47:04 crc kubenswrapper[4947]: I1129 06:47:04.291982 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" event={"ID":"8702f77b-770f-4eac-a154-c2de370c22c2","Type":"ContainerStarted","Data":"7fdbfdcaa18300a2d93b544f888694f43c9c950bef6ce7d1423ce84b0d217740"} Nov 29 06:47:04 crc kubenswrapper[4947]: I1129 06:47:04.291995 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" event={"ID":"8702f77b-770f-4eac-a154-c2de370c22c2","Type":"ContainerStarted","Data":"57520f7ac41820da7405bcc9283deddc6bf04403ea4d303577e048c7b70d90e1"} Nov 29 06:47:04 crc kubenswrapper[4947]: I1129 06:47:04.292006 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" event={"ID":"8702f77b-770f-4eac-a154-c2de370c22c2","Type":"ContainerStarted","Data":"fbd34e5ec9c8542547fbe7fbef50054233d7bf047a8c0b80690f5c44cadb28c3"} Nov 29 06:47:04 crc kubenswrapper[4947]: I1129 06:47:04.292016 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" event={"ID":"8702f77b-770f-4eac-a154-c2de370c22c2","Type":"ContainerStarted","Data":"f9c308b6f502b40294ebd649b73827b96cac95ee0bf48c6b1d11b82f8d1a0833"} Nov 29 06:47:08 crc kubenswrapper[4947]: I1129 06:47:08.320760 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" event={"ID":"8702f77b-770f-4eac-a154-c2de370c22c2","Type":"ContainerStarted","Data":"b4376bcaf4ccbfdd9c2944530caf85ae993ce275a2a4fa60f9c91b988c0b95fc"} Nov 29 06:47:08 crc kubenswrapper[4947]: I1129 06:47:08.321412 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" event={"ID":"8702f77b-770f-4eac-a154-c2de370c22c2","Type":"ContainerStarted","Data":"1d19453bd16055814c91d98849eecde390a5457abee5e5ae5893fd724a50c38e"} Nov 29 06:47:08 crc kubenswrapper[4947]: I1129 06:47:08.321460 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:08 
crc kubenswrapper[4947]: I1129 06:47:08.321534 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:08 crc kubenswrapper[4947]: I1129 06:47:08.321590 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:08 crc kubenswrapper[4947]: I1129 06:47:08.348922 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" podStartSLOduration=8.348902097 podStartE2EDuration="8.348902097s" podCreationTimestamp="2025-11-29 06:47:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:47:08.346234001 +0000 UTC m=+779.390616092" watchObservedRunningTime="2025-11-29 06:47:08.348902097 +0000 UTC m=+779.393284178" Nov 29 06:47:08 crc kubenswrapper[4947]: I1129 06:47:08.357181 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:08 crc kubenswrapper[4947]: I1129 06:47:08.362279 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:09 crc kubenswrapper[4947]: I1129 06:47:09.082809 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-gwf74" Nov 29 06:47:22 crc kubenswrapper[4947]: I1129 06:47:22.987536 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 06:47:22 crc kubenswrapper[4947]: I1129 06:47:22.988134 4947 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 06:47:30 crc kubenswrapper[4947]: I1129 06:47:30.393379 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-69kjm" Nov 29 06:47:51 crc kubenswrapper[4947]: I1129 06:47:51.669119 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr"] Nov 29 06:47:51 crc kubenswrapper[4947]: I1129 06:47:51.672578 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr" Nov 29 06:47:51 crc kubenswrapper[4947]: I1129 06:47:51.684596 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 29 06:47:51 crc kubenswrapper[4947]: I1129 06:47:51.690076 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr"] Nov 29 06:47:51 crc kubenswrapper[4947]: I1129 06:47:51.768627 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8cbbb392-26e7-49a5-bd3f-992f9e5158cb-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr\" (UID: \"8cbbb392-26e7-49a5-bd3f-992f9e5158cb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr" Nov 29 06:47:51 crc kubenswrapper[4947]: I1129 06:47:51.768681 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/8cbbb392-26e7-49a5-bd3f-992f9e5158cb-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr\" (UID: \"8cbbb392-26e7-49a5-bd3f-992f9e5158cb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr" Nov 29 06:47:51 crc kubenswrapper[4947]: I1129 06:47:51.768763 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdv52\" (UniqueName: \"kubernetes.io/projected/8cbbb392-26e7-49a5-bd3f-992f9e5158cb-kube-api-access-zdv52\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr\" (UID: \"8cbbb392-26e7-49a5-bd3f-992f9e5158cb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr" Nov 29 06:47:51 crc kubenswrapper[4947]: I1129 06:47:51.870592 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8cbbb392-26e7-49a5-bd3f-992f9e5158cb-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr\" (UID: \"8cbbb392-26e7-49a5-bd3f-992f9e5158cb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr" Nov 29 06:47:51 crc kubenswrapper[4947]: I1129 06:47:51.870650 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8cbbb392-26e7-49a5-bd3f-992f9e5158cb-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr\" (UID: \"8cbbb392-26e7-49a5-bd3f-992f9e5158cb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr" Nov 29 06:47:51 crc kubenswrapper[4947]: I1129 06:47:51.870694 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdv52\" (UniqueName: \"kubernetes.io/projected/8cbbb392-26e7-49a5-bd3f-992f9e5158cb-kube-api-access-zdv52\") pod 
\"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr\" (UID: \"8cbbb392-26e7-49a5-bd3f-992f9e5158cb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr" Nov 29 06:47:51 crc kubenswrapper[4947]: I1129 06:47:51.871844 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8cbbb392-26e7-49a5-bd3f-992f9e5158cb-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr\" (UID: \"8cbbb392-26e7-49a5-bd3f-992f9e5158cb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr" Nov 29 06:47:51 crc kubenswrapper[4947]: I1129 06:47:51.871878 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8cbbb392-26e7-49a5-bd3f-992f9e5158cb-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr\" (UID: \"8cbbb392-26e7-49a5-bd3f-992f9e5158cb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr" Nov 29 06:47:51 crc kubenswrapper[4947]: I1129 06:47:51.891663 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdv52\" (UniqueName: \"kubernetes.io/projected/8cbbb392-26e7-49a5-bd3f-992f9e5158cb-kube-api-access-zdv52\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr\" (UID: \"8cbbb392-26e7-49a5-bd3f-992f9e5158cb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr" Nov 29 06:47:52 crc kubenswrapper[4947]: I1129 06:47:52.007683 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr" Nov 29 06:47:52 crc kubenswrapper[4947]: I1129 06:47:52.476872 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr"] Nov 29 06:47:52 crc kubenswrapper[4947]: I1129 06:47:52.589400 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr" event={"ID":"8cbbb392-26e7-49a5-bd3f-992f9e5158cb","Type":"ContainerStarted","Data":"800be9320dd96df936d2523a518dc0335a048e07164b9f4e1195e2cf3163f8b4"} Nov 29 06:47:52 crc kubenswrapper[4947]: I1129 06:47:52.987452 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 06:47:52 crc kubenswrapper[4947]: I1129 06:47:52.987554 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 06:47:53 crc kubenswrapper[4947]: I1129 06:47:53.464640 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4qxkr"] Nov 29 06:47:53 crc kubenswrapper[4947]: I1129 06:47:53.465970 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4qxkr" Nov 29 06:47:53 crc kubenswrapper[4947]: I1129 06:47:53.469277 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4qxkr"] Nov 29 06:47:53 crc kubenswrapper[4947]: I1129 06:47:53.597365 4947 generic.go:334] "Generic (PLEG): container finished" podID="8cbbb392-26e7-49a5-bd3f-992f9e5158cb" containerID="8452cb9b2dcf28143b8b2bb5a05dfe7cec44c46b61b7a62690bcec1d50bba67a" exitCode=0 Nov 29 06:47:53 crc kubenswrapper[4947]: I1129 06:47:53.597416 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr" event={"ID":"8cbbb392-26e7-49a5-bd3f-992f9e5158cb","Type":"ContainerDied","Data":"8452cb9b2dcf28143b8b2bb5a05dfe7cec44c46b61b7a62690bcec1d50bba67a"} Nov 29 06:47:53 crc kubenswrapper[4947]: I1129 06:47:53.629902 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcxvk\" (UniqueName: \"kubernetes.io/projected/1e9bb984-69ef-4959-a6ab-ee4ee83d6857-kube-api-access-dcxvk\") pod \"redhat-operators-4qxkr\" (UID: \"1e9bb984-69ef-4959-a6ab-ee4ee83d6857\") " pod="openshift-marketplace/redhat-operators-4qxkr" Nov 29 06:47:53 crc kubenswrapper[4947]: I1129 06:47:53.630356 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e9bb984-69ef-4959-a6ab-ee4ee83d6857-catalog-content\") pod \"redhat-operators-4qxkr\" (UID: \"1e9bb984-69ef-4959-a6ab-ee4ee83d6857\") " pod="openshift-marketplace/redhat-operators-4qxkr" Nov 29 06:47:53 crc kubenswrapper[4947]: I1129 06:47:53.630452 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e9bb984-69ef-4959-a6ab-ee4ee83d6857-utilities\") pod \"redhat-operators-4qxkr\" (UID: 
\"1e9bb984-69ef-4959-a6ab-ee4ee83d6857\") " pod="openshift-marketplace/redhat-operators-4qxkr" Nov 29 06:47:53 crc kubenswrapper[4947]: I1129 06:47:53.731532 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e9bb984-69ef-4959-a6ab-ee4ee83d6857-utilities\") pod \"redhat-operators-4qxkr\" (UID: \"1e9bb984-69ef-4959-a6ab-ee4ee83d6857\") " pod="openshift-marketplace/redhat-operators-4qxkr" Nov 29 06:47:53 crc kubenswrapper[4947]: I1129 06:47:53.731606 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcxvk\" (UniqueName: \"kubernetes.io/projected/1e9bb984-69ef-4959-a6ab-ee4ee83d6857-kube-api-access-dcxvk\") pod \"redhat-operators-4qxkr\" (UID: \"1e9bb984-69ef-4959-a6ab-ee4ee83d6857\") " pod="openshift-marketplace/redhat-operators-4qxkr" Nov 29 06:47:53 crc kubenswrapper[4947]: I1129 06:47:53.731635 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e9bb984-69ef-4959-a6ab-ee4ee83d6857-catalog-content\") pod \"redhat-operators-4qxkr\" (UID: \"1e9bb984-69ef-4959-a6ab-ee4ee83d6857\") " pod="openshift-marketplace/redhat-operators-4qxkr" Nov 29 06:47:53 crc kubenswrapper[4947]: I1129 06:47:53.732101 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e9bb984-69ef-4959-a6ab-ee4ee83d6857-utilities\") pod \"redhat-operators-4qxkr\" (UID: \"1e9bb984-69ef-4959-a6ab-ee4ee83d6857\") " pod="openshift-marketplace/redhat-operators-4qxkr" Nov 29 06:47:53 crc kubenswrapper[4947]: I1129 06:47:53.732204 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e9bb984-69ef-4959-a6ab-ee4ee83d6857-catalog-content\") pod \"redhat-operators-4qxkr\" (UID: \"1e9bb984-69ef-4959-a6ab-ee4ee83d6857\") " 
pod="openshift-marketplace/redhat-operators-4qxkr" Nov 29 06:47:53 crc kubenswrapper[4947]: I1129 06:47:53.750825 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcxvk\" (UniqueName: \"kubernetes.io/projected/1e9bb984-69ef-4959-a6ab-ee4ee83d6857-kube-api-access-dcxvk\") pod \"redhat-operators-4qxkr\" (UID: \"1e9bb984-69ef-4959-a6ab-ee4ee83d6857\") " pod="openshift-marketplace/redhat-operators-4qxkr" Nov 29 06:47:53 crc kubenswrapper[4947]: I1129 06:47:53.787643 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4qxkr" Nov 29 06:47:53 crc kubenswrapper[4947]: I1129 06:47:53.974291 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4qxkr"] Nov 29 06:47:54 crc kubenswrapper[4947]: I1129 06:47:54.604061 4947 generic.go:334] "Generic (PLEG): container finished" podID="1e9bb984-69ef-4959-a6ab-ee4ee83d6857" containerID="1032580b517af64784189942ec0d443be25fad7924315653dc145309aa677676" exitCode=0 Nov 29 06:47:54 crc kubenswrapper[4947]: I1129 06:47:54.604122 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qxkr" event={"ID":"1e9bb984-69ef-4959-a6ab-ee4ee83d6857","Type":"ContainerDied","Data":"1032580b517af64784189942ec0d443be25fad7924315653dc145309aa677676"} Nov 29 06:47:54 crc kubenswrapper[4947]: I1129 06:47:54.605652 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qxkr" event={"ID":"1e9bb984-69ef-4959-a6ab-ee4ee83d6857","Type":"ContainerStarted","Data":"68527c6f79d7ee63213482282792f7ed312fa339c342e2aea62e7d43ad373e05"} Nov 29 06:47:55 crc kubenswrapper[4947]: I1129 06:47:55.614464 4947 generic.go:334] "Generic (PLEG): container finished" podID="8cbbb392-26e7-49a5-bd3f-992f9e5158cb" containerID="35e171c2cc52863a565dabe945118d46fe8b739baded820363bab47f81a83377" exitCode=0 Nov 29 06:47:55 crc kubenswrapper[4947]: 
I1129 06:47:55.614583 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr" event={"ID":"8cbbb392-26e7-49a5-bd3f-992f9e5158cb","Type":"ContainerDied","Data":"35e171c2cc52863a565dabe945118d46fe8b739baded820363bab47f81a83377"} Nov 29 06:47:55 crc kubenswrapper[4947]: I1129 06:47:55.616746 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qxkr" event={"ID":"1e9bb984-69ef-4959-a6ab-ee4ee83d6857","Type":"ContainerStarted","Data":"7b53f2cbab385757be16ecbf24bc247847cd3faca6fab82544deed3f791a06b3"} Nov 29 06:47:56 crc kubenswrapper[4947]: I1129 06:47:56.625262 4947 generic.go:334] "Generic (PLEG): container finished" podID="1e9bb984-69ef-4959-a6ab-ee4ee83d6857" containerID="7b53f2cbab385757be16ecbf24bc247847cd3faca6fab82544deed3f791a06b3" exitCode=0 Nov 29 06:47:56 crc kubenswrapper[4947]: I1129 06:47:56.625332 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qxkr" event={"ID":"1e9bb984-69ef-4959-a6ab-ee4ee83d6857","Type":"ContainerDied","Data":"7b53f2cbab385757be16ecbf24bc247847cd3faca6fab82544deed3f791a06b3"} Nov 29 06:47:56 crc kubenswrapper[4947]: I1129 06:47:56.630021 4947 generic.go:334] "Generic (PLEG): container finished" podID="8cbbb392-26e7-49a5-bd3f-992f9e5158cb" containerID="a476802fc6db8c4ae254d77725709e64febbd00206f5c9553003537ba39adce3" exitCode=0 Nov 29 06:47:56 crc kubenswrapper[4947]: I1129 06:47:56.630048 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr" event={"ID":"8cbbb392-26e7-49a5-bd3f-992f9e5158cb","Type":"ContainerDied","Data":"a476802fc6db8c4ae254d77725709e64febbd00206f5c9553003537ba39adce3"} Nov 29 06:47:57 crc kubenswrapper[4947]: I1129 06:47:57.641926 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qxkr" 
event={"ID":"1e9bb984-69ef-4959-a6ab-ee4ee83d6857","Type":"ContainerStarted","Data":"7bf5e333c411bbce026a34fcdb59e4f7b3fda9819e7613045a9da5c5b8a10f1a"} Nov 29 06:47:57 crc kubenswrapper[4947]: I1129 06:47:57.662979 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4qxkr" podStartSLOduration=1.945030123 podStartE2EDuration="4.662956812s" podCreationTimestamp="2025-11-29 06:47:53 +0000 UTC" firstStartedPulling="2025-11-29 06:47:54.60552399 +0000 UTC m=+825.649906081" lastFinishedPulling="2025-11-29 06:47:57.323450689 +0000 UTC m=+828.367832770" observedRunningTime="2025-11-29 06:47:57.661729762 +0000 UTC m=+828.706111873" watchObservedRunningTime="2025-11-29 06:47:57.662956812 +0000 UTC m=+828.707338903" Nov 29 06:47:57 crc kubenswrapper[4947]: I1129 06:47:57.881705 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr" Nov 29 06:47:57 crc kubenswrapper[4947]: I1129 06:47:57.990787 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdv52\" (UniqueName: \"kubernetes.io/projected/8cbbb392-26e7-49a5-bd3f-992f9e5158cb-kube-api-access-zdv52\") pod \"8cbbb392-26e7-49a5-bd3f-992f9e5158cb\" (UID: \"8cbbb392-26e7-49a5-bd3f-992f9e5158cb\") " Nov 29 06:47:57 crc kubenswrapper[4947]: I1129 06:47:57.990853 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8cbbb392-26e7-49a5-bd3f-992f9e5158cb-util\") pod \"8cbbb392-26e7-49a5-bd3f-992f9e5158cb\" (UID: \"8cbbb392-26e7-49a5-bd3f-992f9e5158cb\") " Nov 29 06:47:57 crc kubenswrapper[4947]: I1129 06:47:57.990912 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8cbbb392-26e7-49a5-bd3f-992f9e5158cb-bundle\") pod 
\"8cbbb392-26e7-49a5-bd3f-992f9e5158cb\" (UID: \"8cbbb392-26e7-49a5-bd3f-992f9e5158cb\") " Nov 29 06:47:57 crc kubenswrapper[4947]: I1129 06:47:57.991799 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cbbb392-26e7-49a5-bd3f-992f9e5158cb-bundle" (OuterVolumeSpecName: "bundle") pod "8cbbb392-26e7-49a5-bd3f-992f9e5158cb" (UID: "8cbbb392-26e7-49a5-bd3f-992f9e5158cb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:47:57 crc kubenswrapper[4947]: I1129 06:47:57.998552 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cbbb392-26e7-49a5-bd3f-992f9e5158cb-kube-api-access-zdv52" (OuterVolumeSpecName: "kube-api-access-zdv52") pod "8cbbb392-26e7-49a5-bd3f-992f9e5158cb" (UID: "8cbbb392-26e7-49a5-bd3f-992f9e5158cb"). InnerVolumeSpecName "kube-api-access-zdv52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:47:58 crc kubenswrapper[4947]: I1129 06:47:58.065971 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cbbb392-26e7-49a5-bd3f-992f9e5158cb-util" (OuterVolumeSpecName: "util") pod "8cbbb392-26e7-49a5-bd3f-992f9e5158cb" (UID: "8cbbb392-26e7-49a5-bd3f-992f9e5158cb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:47:58 crc kubenswrapper[4947]: I1129 06:47:58.093070 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdv52\" (UniqueName: \"kubernetes.io/projected/8cbbb392-26e7-49a5-bd3f-992f9e5158cb-kube-api-access-zdv52\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:58 crc kubenswrapper[4947]: I1129 06:47:58.093127 4947 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8cbbb392-26e7-49a5-bd3f-992f9e5158cb-util\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:58 crc kubenswrapper[4947]: I1129 06:47:58.093140 4947 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8cbbb392-26e7-49a5-bd3f-992f9e5158cb-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:47:58 crc kubenswrapper[4947]: I1129 06:47:58.650634 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr" event={"ID":"8cbbb392-26e7-49a5-bd3f-992f9e5158cb","Type":"ContainerDied","Data":"800be9320dd96df936d2523a518dc0335a048e07164b9f4e1195e2cf3163f8b4"} Nov 29 06:47:58 crc kubenswrapper[4947]: I1129 06:47:58.650717 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="800be9320dd96df936d2523a518dc0335a048e07164b9f4e1195e2cf3163f8b4" Nov 29 06:47:58 crc kubenswrapper[4947]: I1129 06:47:58.650666 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr" Nov 29 06:48:03 crc kubenswrapper[4947]: I1129 06:48:03.101108 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-wrff6"] Nov 29 06:48:03 crc kubenswrapper[4947]: E1129 06:48:03.101908 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cbbb392-26e7-49a5-bd3f-992f9e5158cb" containerName="util" Nov 29 06:48:03 crc kubenswrapper[4947]: I1129 06:48:03.101925 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cbbb392-26e7-49a5-bd3f-992f9e5158cb" containerName="util" Nov 29 06:48:03 crc kubenswrapper[4947]: E1129 06:48:03.101939 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cbbb392-26e7-49a5-bd3f-992f9e5158cb" containerName="extract" Nov 29 06:48:03 crc kubenswrapper[4947]: I1129 06:48:03.101946 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cbbb392-26e7-49a5-bd3f-992f9e5158cb" containerName="extract" Nov 29 06:48:03 crc kubenswrapper[4947]: E1129 06:48:03.101964 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cbbb392-26e7-49a5-bd3f-992f9e5158cb" containerName="pull" Nov 29 06:48:03 crc kubenswrapper[4947]: I1129 06:48:03.101972 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cbbb392-26e7-49a5-bd3f-992f9e5158cb" containerName="pull" Nov 29 06:48:03 crc kubenswrapper[4947]: I1129 06:48:03.102094 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cbbb392-26e7-49a5-bd3f-992f9e5158cb" containerName="extract" Nov 29 06:48:03 crc kubenswrapper[4947]: I1129 06:48:03.102567 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-wrff6" Nov 29 06:48:03 crc kubenswrapper[4947]: I1129 06:48:03.104374 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-d7t92" Nov 29 06:48:03 crc kubenswrapper[4947]: I1129 06:48:03.105010 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 29 06:48:03 crc kubenswrapper[4947]: I1129 06:48:03.105058 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 29 06:48:03 crc kubenswrapper[4947]: I1129 06:48:03.110403 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-wrff6"] Nov 29 06:48:03 crc kubenswrapper[4947]: I1129 06:48:03.256852 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpm82\" (UniqueName: \"kubernetes.io/projected/e1d1db66-4ff5-4958-b66a-788459bdfe64-kube-api-access-mpm82\") pod \"nmstate-operator-5b5b58f5c8-wrff6\" (UID: \"e1d1db66-4ff5-4958-b66a-788459bdfe64\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-wrff6" Nov 29 06:48:03 crc kubenswrapper[4947]: I1129 06:48:03.358022 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpm82\" (UniqueName: \"kubernetes.io/projected/e1d1db66-4ff5-4958-b66a-788459bdfe64-kube-api-access-mpm82\") pod \"nmstate-operator-5b5b58f5c8-wrff6\" (UID: \"e1d1db66-4ff5-4958-b66a-788459bdfe64\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-wrff6" Nov 29 06:48:03 crc kubenswrapper[4947]: I1129 06:48:03.391482 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpm82\" (UniqueName: \"kubernetes.io/projected/e1d1db66-4ff5-4958-b66a-788459bdfe64-kube-api-access-mpm82\") pod \"nmstate-operator-5b5b58f5c8-wrff6\" (UID: 
\"e1d1db66-4ff5-4958-b66a-788459bdfe64\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-wrff6" Nov 29 06:48:03 crc kubenswrapper[4947]: I1129 06:48:03.419156 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-wrff6" Nov 29 06:48:03 crc kubenswrapper[4947]: I1129 06:48:03.652809 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-wrff6"] Nov 29 06:48:03 crc kubenswrapper[4947]: I1129 06:48:03.673802 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-wrff6" event={"ID":"e1d1db66-4ff5-4958-b66a-788459bdfe64","Type":"ContainerStarted","Data":"31404f345c14325241f4e8bc7fe08d18e5ca5ea8e74583dbf3f3ecddd0fdef40"} Nov 29 06:48:03 crc kubenswrapper[4947]: I1129 06:48:03.788748 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4qxkr" Nov 29 06:48:03 crc kubenswrapper[4947]: I1129 06:48:03.788822 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4qxkr" Nov 29 06:48:03 crc kubenswrapper[4947]: I1129 06:48:03.853860 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4qxkr" Nov 29 06:48:04 crc kubenswrapper[4947]: I1129 06:48:04.745178 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4qxkr" Nov 29 06:48:06 crc kubenswrapper[4947]: I1129 06:48:06.650186 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4qxkr"] Nov 29 06:48:06 crc kubenswrapper[4947]: I1129 06:48:06.700610 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4qxkr" podUID="1e9bb984-69ef-4959-a6ab-ee4ee83d6857" containerName="registry-server" 
containerID="cri-o://7bf5e333c411bbce026a34fcdb59e4f7b3fda9819e7613045a9da5c5b8a10f1a" gracePeriod=2 Nov 29 06:48:08 crc kubenswrapper[4947]: I1129 06:48:08.714570 4947 generic.go:334] "Generic (PLEG): container finished" podID="1e9bb984-69ef-4959-a6ab-ee4ee83d6857" containerID="7bf5e333c411bbce026a34fcdb59e4f7b3fda9819e7613045a9da5c5b8a10f1a" exitCode=0 Nov 29 06:48:08 crc kubenswrapper[4947]: I1129 06:48:08.714854 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qxkr" event={"ID":"1e9bb984-69ef-4959-a6ab-ee4ee83d6857","Type":"ContainerDied","Data":"7bf5e333c411bbce026a34fcdb59e4f7b3fda9819e7613045a9da5c5b8a10f1a"} Nov 29 06:48:10 crc kubenswrapper[4947]: I1129 06:48:10.343467 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4qxkr" Nov 29 06:48:10 crc kubenswrapper[4947]: I1129 06:48:10.522843 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e9bb984-69ef-4959-a6ab-ee4ee83d6857-utilities\") pod \"1e9bb984-69ef-4959-a6ab-ee4ee83d6857\" (UID: \"1e9bb984-69ef-4959-a6ab-ee4ee83d6857\") " Nov 29 06:48:10 crc kubenswrapper[4947]: I1129 06:48:10.523323 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e9bb984-69ef-4959-a6ab-ee4ee83d6857-catalog-content\") pod \"1e9bb984-69ef-4959-a6ab-ee4ee83d6857\" (UID: \"1e9bb984-69ef-4959-a6ab-ee4ee83d6857\") " Nov 29 06:48:10 crc kubenswrapper[4947]: I1129 06:48:10.523355 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcxvk\" (UniqueName: \"kubernetes.io/projected/1e9bb984-69ef-4959-a6ab-ee4ee83d6857-kube-api-access-dcxvk\") pod \"1e9bb984-69ef-4959-a6ab-ee4ee83d6857\" (UID: \"1e9bb984-69ef-4959-a6ab-ee4ee83d6857\") " Nov 29 06:48:10 crc kubenswrapper[4947]: I1129 
06:48:10.523987 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e9bb984-69ef-4959-a6ab-ee4ee83d6857-utilities" (OuterVolumeSpecName: "utilities") pod "1e9bb984-69ef-4959-a6ab-ee4ee83d6857" (UID: "1e9bb984-69ef-4959-a6ab-ee4ee83d6857"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:48:10 crc kubenswrapper[4947]: I1129 06:48:10.529179 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e9bb984-69ef-4959-a6ab-ee4ee83d6857-kube-api-access-dcxvk" (OuterVolumeSpecName: "kube-api-access-dcxvk") pod "1e9bb984-69ef-4959-a6ab-ee4ee83d6857" (UID: "1e9bb984-69ef-4959-a6ab-ee4ee83d6857"). InnerVolumeSpecName "kube-api-access-dcxvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:48:10 crc kubenswrapper[4947]: I1129 06:48:10.624383 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcxvk\" (UniqueName: \"kubernetes.io/projected/1e9bb984-69ef-4959-a6ab-ee4ee83d6857-kube-api-access-dcxvk\") on node \"crc\" DevicePath \"\"" Nov 29 06:48:10 crc kubenswrapper[4947]: I1129 06:48:10.624418 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e9bb984-69ef-4959-a6ab-ee4ee83d6857-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 06:48:10 crc kubenswrapper[4947]: I1129 06:48:10.636277 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e9bb984-69ef-4959-a6ab-ee4ee83d6857-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e9bb984-69ef-4959-a6ab-ee4ee83d6857" (UID: "1e9bb984-69ef-4959-a6ab-ee4ee83d6857"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:48:10 crc kubenswrapper[4947]: I1129 06:48:10.731363 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e9bb984-69ef-4959-a6ab-ee4ee83d6857-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 06:48:10 crc kubenswrapper[4947]: I1129 06:48:10.735746 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qxkr" event={"ID":"1e9bb984-69ef-4959-a6ab-ee4ee83d6857","Type":"ContainerDied","Data":"68527c6f79d7ee63213482282792f7ed312fa339c342e2aea62e7d43ad373e05"} Nov 29 06:48:10 crc kubenswrapper[4947]: I1129 06:48:10.735826 4947 scope.go:117] "RemoveContainer" containerID="7bf5e333c411bbce026a34fcdb59e4f7b3fda9819e7613045a9da5c5b8a10f1a" Nov 29 06:48:10 crc kubenswrapper[4947]: I1129 06:48:10.735771 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4qxkr" Nov 29 06:48:10 crc kubenswrapper[4947]: I1129 06:48:10.744089 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-wrff6" event={"ID":"e1d1db66-4ff5-4958-b66a-788459bdfe64","Type":"ContainerStarted","Data":"aaa88cf6e9db08479c1d61e3249123ae8dbc7026970a2d3e59e03e1ca02b5d23"} Nov 29 06:48:10 crc kubenswrapper[4947]: I1129 06:48:10.766215 4947 scope.go:117] "RemoveContainer" containerID="7b53f2cbab385757be16ecbf24bc247847cd3faca6fab82544deed3f791a06b3" Nov 29 06:48:10 crc kubenswrapper[4947]: I1129 06:48:10.767100 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-wrff6" podStartSLOduration=1.108121234 podStartE2EDuration="7.767086499s" podCreationTimestamp="2025-11-29 06:48:03 +0000 UTC" firstStartedPulling="2025-11-29 06:48:03.659069704 +0000 UTC m=+834.703451785" lastFinishedPulling="2025-11-29 06:48:10.318034969 +0000 UTC m=+841.362417050" 
observedRunningTime="2025-11-29 06:48:10.763092558 +0000 UTC m=+841.807474659" watchObservedRunningTime="2025-11-29 06:48:10.767086499 +0000 UTC m=+841.811468580" Nov 29 06:48:10 crc kubenswrapper[4947]: I1129 06:48:10.786208 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4qxkr"] Nov 29 06:48:10 crc kubenswrapper[4947]: I1129 06:48:10.786288 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4qxkr"] Nov 29 06:48:10 crc kubenswrapper[4947]: I1129 06:48:10.801385 4947 scope.go:117] "RemoveContainer" containerID="1032580b517af64784189942ec0d443be25fad7924315653dc145309aa677676" Nov 29 06:48:11 crc kubenswrapper[4947]: I1129 06:48:11.189135 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e9bb984-69ef-4959-a6ab-ee4ee83d6857" path="/var/lib/kubelet/pods/1e9bb984-69ef-4959-a6ab-ee4ee83d6857/volumes" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.147036 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gkxps"] Nov 29 06:48:12 crc kubenswrapper[4947]: E1129 06:48:12.147740 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e9bb984-69ef-4959-a6ab-ee4ee83d6857" containerName="extract-utilities" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.147762 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e9bb984-69ef-4959-a6ab-ee4ee83d6857" containerName="extract-utilities" Nov 29 06:48:12 crc kubenswrapper[4947]: E1129 06:48:12.147785 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e9bb984-69ef-4959-a6ab-ee4ee83d6857" containerName="extract-content" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.147796 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e9bb984-69ef-4959-a6ab-ee4ee83d6857" containerName="extract-content" Nov 29 06:48:12 crc kubenswrapper[4947]: E1129 06:48:12.147810 4947 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1e9bb984-69ef-4959-a6ab-ee4ee83d6857" containerName="registry-server" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.147821 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e9bb984-69ef-4959-a6ab-ee4ee83d6857" containerName="registry-server" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.147994 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e9bb984-69ef-4959-a6ab-ee4ee83d6857" containerName="registry-server" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.148633 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gkxps" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.149444 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhjq8\" (UniqueName: \"kubernetes.io/projected/b84ff824-fb24-473d-a9df-501bc25d8547-kube-api-access-hhjq8\") pod \"nmstate-webhook-5f6d4c5ccb-gkxps\" (UID: \"b84ff824-fb24-473d-a9df-501bc25d8547\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gkxps" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.149499 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b84ff824-fb24-473d-a9df-501bc25d8547-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-gkxps\" (UID: \"b84ff824-fb24-473d-a9df-501bc25d8547\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gkxps" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.155060 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-ds225" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.155720 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-w9rng"] Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.156785 4947 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w9rng" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.157774 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.197060 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-cb2bc"] Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.204385 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-cb2bc" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.207008 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-w9rng"] Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.220070 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gkxps"] Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.250663 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpptd\" (UniqueName: \"kubernetes.io/projected/ec3e638f-25fc-45ef-b33a-696d06037f00-kube-api-access-kpptd\") pod \"nmstate-handler-cb2bc\" (UID: \"ec3e638f-25fc-45ef-b33a-696d06037f00\") " pod="openshift-nmstate/nmstate-handler-cb2bc" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.250726 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ec3e638f-25fc-45ef-b33a-696d06037f00-ovs-socket\") pod \"nmstate-handler-cb2bc\" (UID: \"ec3e638f-25fc-45ef-b33a-696d06037f00\") " pod="openshift-nmstate/nmstate-handler-cb2bc" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.250753 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/ec3e638f-25fc-45ef-b33a-696d06037f00-dbus-socket\") pod \"nmstate-handler-cb2bc\" (UID: \"ec3e638f-25fc-45ef-b33a-696d06037f00\") " pod="openshift-nmstate/nmstate-handler-cb2bc" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.250805 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhjq8\" (UniqueName: \"kubernetes.io/projected/b84ff824-fb24-473d-a9df-501bc25d8547-kube-api-access-hhjq8\") pod \"nmstate-webhook-5f6d4c5ccb-gkxps\" (UID: \"b84ff824-fb24-473d-a9df-501bc25d8547\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gkxps" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.250823 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t7mg\" (UniqueName: \"kubernetes.io/projected/06e121e7-c32a-419c-ab90-3ac8cd45fb7c-kube-api-access-2t7mg\") pod \"nmstate-metrics-7f946cbc9-w9rng\" (UID: \"06e121e7-c32a-419c-ab90-3ac8cd45fb7c\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w9rng" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.250856 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b84ff824-fb24-473d-a9df-501bc25d8547-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-gkxps\" (UID: \"b84ff824-fb24-473d-a9df-501bc25d8547\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gkxps" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.250882 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ec3e638f-25fc-45ef-b33a-696d06037f00-nmstate-lock\") pod \"nmstate-handler-cb2bc\" (UID: \"ec3e638f-25fc-45ef-b33a-696d06037f00\") " pod="openshift-nmstate/nmstate-handler-cb2bc" Nov 29 06:48:12 crc kubenswrapper[4947]: E1129 06:48:12.251111 4947 secret.go:188] Couldn't get secret 
openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 29 06:48:12 crc kubenswrapper[4947]: E1129 06:48:12.251203 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b84ff824-fb24-473d-a9df-501bc25d8547-tls-key-pair podName:b84ff824-fb24-473d-a9df-501bc25d8547 nodeName:}" failed. No retries permitted until 2025-11-29 06:48:12.751172136 +0000 UTC m=+843.795554217 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/b84ff824-fb24-473d-a9df-501bc25d8547-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-gkxps" (UID: "b84ff824-fb24-473d-a9df-501bc25d8547") : secret "openshift-nmstate-webhook" not found Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.297296 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhjq8\" (UniqueName: \"kubernetes.io/projected/b84ff824-fb24-473d-a9df-501bc25d8547-kube-api-access-hhjq8\") pod \"nmstate-webhook-5f6d4c5ccb-gkxps\" (UID: \"b84ff824-fb24-473d-a9df-501bc25d8547\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gkxps" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.344136 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xsb6d"] Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.344938 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xsb6d" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.348828 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.348980 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.348979 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-5ttwb" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.351671 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpptd\" (UniqueName: \"kubernetes.io/projected/ec3e638f-25fc-45ef-b33a-696d06037f00-kube-api-access-kpptd\") pod \"nmstate-handler-cb2bc\" (UID: \"ec3e638f-25fc-45ef-b33a-696d06037f00\") " pod="openshift-nmstate/nmstate-handler-cb2bc" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.351712 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ec3e638f-25fc-45ef-b33a-696d06037f00-ovs-socket\") pod \"nmstate-handler-cb2bc\" (UID: \"ec3e638f-25fc-45ef-b33a-696d06037f00\") " pod="openshift-nmstate/nmstate-handler-cb2bc" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.351738 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ec3e638f-25fc-45ef-b33a-696d06037f00-dbus-socket\") pod \"nmstate-handler-cb2bc\" (UID: \"ec3e638f-25fc-45ef-b33a-696d06037f00\") " pod="openshift-nmstate/nmstate-handler-cb2bc" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.351764 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t7mg\" (UniqueName: 
\"kubernetes.io/projected/06e121e7-c32a-419c-ab90-3ac8cd45fb7c-kube-api-access-2t7mg\") pod \"nmstate-metrics-7f946cbc9-w9rng\" (UID: \"06e121e7-c32a-419c-ab90-3ac8cd45fb7c\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w9rng" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.351892 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ec3e638f-25fc-45ef-b33a-696d06037f00-ovs-socket\") pod \"nmstate-handler-cb2bc\" (UID: \"ec3e638f-25fc-45ef-b33a-696d06037f00\") " pod="openshift-nmstate/nmstate-handler-cb2bc" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.352082 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ec3e638f-25fc-45ef-b33a-696d06037f00-dbus-socket\") pod \"nmstate-handler-cb2bc\" (UID: \"ec3e638f-25fc-45ef-b33a-696d06037f00\") " pod="openshift-nmstate/nmstate-handler-cb2bc" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.352099 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ec3e638f-25fc-45ef-b33a-696d06037f00-nmstate-lock\") pod \"nmstate-handler-cb2bc\" (UID: \"ec3e638f-25fc-45ef-b33a-696d06037f00\") " pod="openshift-nmstate/nmstate-handler-cb2bc" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.352196 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ec3e638f-25fc-45ef-b33a-696d06037f00-nmstate-lock\") pod \"nmstate-handler-cb2bc\" (UID: \"ec3e638f-25fc-45ef-b33a-696d06037f00\") " pod="openshift-nmstate/nmstate-handler-cb2bc" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.354260 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xsb6d"] Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.369783 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kpptd\" (UniqueName: \"kubernetes.io/projected/ec3e638f-25fc-45ef-b33a-696d06037f00-kube-api-access-kpptd\") pod \"nmstate-handler-cb2bc\" (UID: \"ec3e638f-25fc-45ef-b33a-696d06037f00\") " pod="openshift-nmstate/nmstate-handler-cb2bc" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.369861 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t7mg\" (UniqueName: \"kubernetes.io/projected/06e121e7-c32a-419c-ab90-3ac8cd45fb7c-kube-api-access-2t7mg\") pod \"nmstate-metrics-7f946cbc9-w9rng\" (UID: \"06e121e7-c32a-419c-ab90-3ac8cd45fb7c\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w9rng" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.453513 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/89a3bce9-0cbd-4794-9af1-2618110280cf-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-xsb6d\" (UID: \"89a3bce9-0cbd-4794-9af1-2618110280cf\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xsb6d" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.453630 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7w7l\" (UniqueName: \"kubernetes.io/projected/89a3bce9-0cbd-4794-9af1-2618110280cf-kube-api-access-h7w7l\") pod \"nmstate-console-plugin-7fbb5f6569-xsb6d\" (UID: \"89a3bce9-0cbd-4794-9af1-2618110280cf\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xsb6d" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.453704 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/89a3bce9-0cbd-4794-9af1-2618110280cf-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-xsb6d\" (UID: \"89a3bce9-0cbd-4794-9af1-2618110280cf\") " 
pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xsb6d" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.474511 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w9rng" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.519197 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-cb2bc" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.555720 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/89a3bce9-0cbd-4794-9af1-2618110280cf-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-xsb6d\" (UID: \"89a3bce9-0cbd-4794-9af1-2618110280cf\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xsb6d" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.556317 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7w7l\" (UniqueName: \"kubernetes.io/projected/89a3bce9-0cbd-4794-9af1-2618110280cf-kube-api-access-h7w7l\") pod \"nmstate-console-plugin-7fbb5f6569-xsb6d\" (UID: \"89a3bce9-0cbd-4794-9af1-2618110280cf\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xsb6d" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.556427 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/89a3bce9-0cbd-4794-9af1-2618110280cf-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-xsb6d\" (UID: \"89a3bce9-0cbd-4794-9af1-2618110280cf\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xsb6d" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.555996 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6c5cb9448-4slqr"] Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.557623 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.559824 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/89a3bce9-0cbd-4794-9af1-2618110280cf-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-xsb6d\" (UID: \"89a3bce9-0cbd-4794-9af1-2618110280cf\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xsb6d" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.566058 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/89a3bce9-0cbd-4794-9af1-2618110280cf-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-xsb6d\" (UID: \"89a3bce9-0cbd-4794-9af1-2618110280cf\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xsb6d" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.574549 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c5cb9448-4slqr"] Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.587188 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7w7l\" (UniqueName: \"kubernetes.io/projected/89a3bce9-0cbd-4794-9af1-2618110280cf-kube-api-access-h7w7l\") pod \"nmstate-console-plugin-7fbb5f6569-xsb6d\" (UID: \"89a3bce9-0cbd-4794-9af1-2618110280cf\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xsb6d" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.666826 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xsb6d" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.757103 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-cb2bc" event={"ID":"ec3e638f-25fc-45ef-b33a-696d06037f00","Type":"ContainerStarted","Data":"013f8af4516c69caff07c955442054c7a9d3e268ebc2ffeaeecc6def387380b2"} Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.761451 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c6a4d69-9482-4704-be87-465ca8abbff0-trusted-ca-bundle\") pod \"console-6c5cb9448-4slqr\" (UID: \"6c6a4d69-9482-4704-be87-465ca8abbff0\") " pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.761510 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c6a4d69-9482-4704-be87-465ca8abbff0-service-ca\") pod \"console-6c5cb9448-4slqr\" (UID: \"6c6a4d69-9482-4704-be87-465ca8abbff0\") " pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.761604 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm24s\" (UniqueName: \"kubernetes.io/projected/6c6a4d69-9482-4704-be87-465ca8abbff0-kube-api-access-xm24s\") pod \"console-6c5cb9448-4slqr\" (UID: \"6c6a4d69-9482-4704-be87-465ca8abbff0\") " pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.761652 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b84ff824-fb24-473d-a9df-501bc25d8547-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-gkxps\" (UID: \"b84ff824-fb24-473d-a9df-501bc25d8547\") " 
pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gkxps" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.761694 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6c6a4d69-9482-4704-be87-465ca8abbff0-console-oauth-config\") pod \"console-6c5cb9448-4slqr\" (UID: \"6c6a4d69-9482-4704-be87-465ca8abbff0\") " pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.761726 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c6a4d69-9482-4704-be87-465ca8abbff0-console-serving-cert\") pod \"console-6c5cb9448-4slqr\" (UID: \"6c6a4d69-9482-4704-be87-465ca8abbff0\") " pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.761765 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6c6a4d69-9482-4704-be87-465ca8abbff0-oauth-serving-cert\") pod \"console-6c5cb9448-4slqr\" (UID: \"6c6a4d69-9482-4704-be87-465ca8abbff0\") " pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.761817 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6c6a4d69-9482-4704-be87-465ca8abbff0-console-config\") pod \"console-6c5cb9448-4slqr\" (UID: \"6c6a4d69-9482-4704-be87-465ca8abbff0\") " pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.765501 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b84ff824-fb24-473d-a9df-501bc25d8547-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-gkxps\" (UID: 
\"b84ff824-fb24-473d-a9df-501bc25d8547\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gkxps" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.768296 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gkxps" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.864499 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6c6a4d69-9482-4704-be87-465ca8abbff0-oauth-serving-cert\") pod \"console-6c5cb9448-4slqr\" (UID: \"6c6a4d69-9482-4704-be87-465ca8abbff0\") " pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.864548 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6c6a4d69-9482-4704-be87-465ca8abbff0-console-config\") pod \"console-6c5cb9448-4slqr\" (UID: \"6c6a4d69-9482-4704-be87-465ca8abbff0\") " pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.864583 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c6a4d69-9482-4704-be87-465ca8abbff0-trusted-ca-bundle\") pod \"console-6c5cb9448-4slqr\" (UID: \"6c6a4d69-9482-4704-be87-465ca8abbff0\") " pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.864602 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c6a4d69-9482-4704-be87-465ca8abbff0-service-ca\") pod \"console-6c5cb9448-4slqr\" (UID: \"6c6a4d69-9482-4704-be87-465ca8abbff0\") " pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.864632 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xm24s\" (UniqueName: \"kubernetes.io/projected/6c6a4d69-9482-4704-be87-465ca8abbff0-kube-api-access-xm24s\") pod \"console-6c5cb9448-4slqr\" (UID: \"6c6a4d69-9482-4704-be87-465ca8abbff0\") " pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.864662 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6c6a4d69-9482-4704-be87-465ca8abbff0-console-oauth-config\") pod \"console-6c5cb9448-4slqr\" (UID: \"6c6a4d69-9482-4704-be87-465ca8abbff0\") " pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.864683 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c6a4d69-9482-4704-be87-465ca8abbff0-console-serving-cert\") pod \"console-6c5cb9448-4slqr\" (UID: \"6c6a4d69-9482-4704-be87-465ca8abbff0\") " pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.866481 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c6a4d69-9482-4704-be87-465ca8abbff0-service-ca\") pod \"console-6c5cb9448-4slqr\" (UID: \"6c6a4d69-9482-4704-be87-465ca8abbff0\") " pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.866496 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c6a4d69-9482-4704-be87-465ca8abbff0-trusted-ca-bundle\") pod \"console-6c5cb9448-4slqr\" (UID: \"6c6a4d69-9482-4704-be87-465ca8abbff0\") " pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.867155 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/6c6a4d69-9482-4704-be87-465ca8abbff0-oauth-serving-cert\") pod \"console-6c5cb9448-4slqr\" (UID: \"6c6a4d69-9482-4704-be87-465ca8abbff0\") " pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.867925 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6c6a4d69-9482-4704-be87-465ca8abbff0-console-config\") pod \"console-6c5cb9448-4slqr\" (UID: \"6c6a4d69-9482-4704-be87-465ca8abbff0\") " pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.868717 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c6a4d69-9482-4704-be87-465ca8abbff0-console-serving-cert\") pod \"console-6c5cb9448-4slqr\" (UID: \"6c6a4d69-9482-4704-be87-465ca8abbff0\") " pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.874346 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6c6a4d69-9482-4704-be87-465ca8abbff0-console-oauth-config\") pod \"console-6c5cb9448-4slqr\" (UID: \"6c6a4d69-9482-4704-be87-465ca8abbff0\") " pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.883715 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xsb6d"] Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.889176 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm24s\" (UniqueName: \"kubernetes.io/projected/6c6a4d69-9482-4704-be87-465ca8abbff0-kube-api-access-xm24s\") pod \"console-6c5cb9448-4slqr\" (UID: \"6c6a4d69-9482-4704-be87-465ca8abbff0\") " pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 
06:48:12.891878 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:12 crc kubenswrapper[4947]: W1129 06:48:12.908210 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89a3bce9_0cbd_4794_9af1_2618110280cf.slice/crio-1012cd58a11312047a4501753c7a9ec3547f53f1f02c123d9cd072df5f87a67d WatchSource:0}: Error finding container 1012cd58a11312047a4501753c7a9ec3547f53f1f02c123d9cd072df5f87a67d: Status 404 returned error can't find the container with id 1012cd58a11312047a4501753c7a9ec3547f53f1f02c123d9cd072df5f87a67d Nov 29 06:48:12 crc kubenswrapper[4947]: I1129 06:48:12.985324 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-w9rng"] Nov 29 06:48:12 crc kubenswrapper[4947]: W1129 06:48:12.996503 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06e121e7_c32a_419c_ab90_3ac8cd45fb7c.slice/crio-288f20cc8a283f8c289f6f4573cec43123cc9639822dfa9ebb214639518de2ad WatchSource:0}: Error finding container 288f20cc8a283f8c289f6f4573cec43123cc9639822dfa9ebb214639518de2ad: Status 404 returned error can't find the container with id 288f20cc8a283f8c289f6f4573cec43123cc9639822dfa9ebb214639518de2ad Nov 29 06:48:13 crc kubenswrapper[4947]: I1129 06:48:13.007947 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gkxps"] Nov 29 06:48:13 crc kubenswrapper[4947]: W1129 06:48:13.018616 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb84ff824_fb24_473d_a9df_501bc25d8547.slice/crio-6c3cba22250950135916066d34eb5008e414bf9383acb968242acf40df0333de WatchSource:0}: Error finding container 6c3cba22250950135916066d34eb5008e414bf9383acb968242acf40df0333de: Status 404 
returned error can't find the container with id 6c3cba22250950135916066d34eb5008e414bf9383acb968242acf40df0333de Nov 29 06:48:13 crc kubenswrapper[4947]: I1129 06:48:13.116381 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c5cb9448-4slqr"] Nov 29 06:48:13 crc kubenswrapper[4947]: I1129 06:48:13.766211 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w9rng" event={"ID":"06e121e7-c32a-419c-ab90-3ac8cd45fb7c","Type":"ContainerStarted","Data":"288f20cc8a283f8c289f6f4573cec43123cc9639822dfa9ebb214639518de2ad"} Nov 29 06:48:13 crc kubenswrapper[4947]: I1129 06:48:13.769666 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c5cb9448-4slqr" event={"ID":"6c6a4d69-9482-4704-be87-465ca8abbff0","Type":"ContainerStarted","Data":"17f3d7a63432fd337f624eca9c4f5a349c3ef8520af556daf63a7e162cece8f1"} Nov 29 06:48:13 crc kubenswrapper[4947]: I1129 06:48:13.769706 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c5cb9448-4slqr" event={"ID":"6c6a4d69-9482-4704-be87-465ca8abbff0","Type":"ContainerStarted","Data":"e7ed0a9b724f217995d70bf5657767df72bc1819ba715691caf1f98efe9ffce6"} Nov 29 06:48:13 crc kubenswrapper[4947]: I1129 06:48:13.770793 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xsb6d" event={"ID":"89a3bce9-0cbd-4794-9af1-2618110280cf","Type":"ContainerStarted","Data":"1012cd58a11312047a4501753c7a9ec3547f53f1f02c123d9cd072df5f87a67d"} Nov 29 06:48:13 crc kubenswrapper[4947]: I1129 06:48:13.771795 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gkxps" event={"ID":"b84ff824-fb24-473d-a9df-501bc25d8547","Type":"ContainerStarted","Data":"6c3cba22250950135916066d34eb5008e414bf9383acb968242acf40df0333de"} Nov 29 06:48:13 crc kubenswrapper[4947]: I1129 06:48:13.796812 4947 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6c5cb9448-4slqr" podStartSLOduration=1.7967873060000001 podStartE2EDuration="1.796787306s" podCreationTimestamp="2025-11-29 06:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:48:13.788541706 +0000 UTC m=+844.832923797" watchObservedRunningTime="2025-11-29 06:48:13.796787306 +0000 UTC m=+844.841169387" Nov 29 06:48:17 crc kubenswrapper[4947]: I1129 06:48:17.803510 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xsb6d" event={"ID":"89a3bce9-0cbd-4794-9af1-2618110280cf","Type":"ContainerStarted","Data":"19d54e04ff65ec71c94d079913a29931e0f6534ab5a94d288eebbd108ecc0fef"} Nov 29 06:48:17 crc kubenswrapper[4947]: I1129 06:48:17.808474 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gkxps" event={"ID":"b84ff824-fb24-473d-a9df-501bc25d8547","Type":"ContainerStarted","Data":"510e625174e559e24571d6b723a863be61a6f515df33d9aaff174c587a8f62a7"} Nov 29 06:48:17 crc kubenswrapper[4947]: I1129 06:48:17.808605 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gkxps" Nov 29 06:48:17 crc kubenswrapper[4947]: I1129 06:48:17.810305 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-cb2bc" event={"ID":"ec3e638f-25fc-45ef-b33a-696d06037f00","Type":"ContainerStarted","Data":"209dd602382b6f29ca347f9f3286b0ddb7ec093104a269a8ac028775bf9443e7"} Nov 29 06:48:17 crc kubenswrapper[4947]: I1129 06:48:17.810524 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-cb2bc" Nov 29 06:48:17 crc kubenswrapper[4947]: I1129 06:48:17.812072 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w9rng" event={"ID":"06e121e7-c32a-419c-ab90-3ac8cd45fb7c","Type":"ContainerStarted","Data":"094f489913222d0f6e5fdd3eae10dc31d80a1d27a67a857e8d79410fb27b68ab"} Nov 29 06:48:17 crc kubenswrapper[4947]: I1129 06:48:17.827182 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xsb6d" podStartSLOduration=1.823750422 podStartE2EDuration="5.827159489s" podCreationTimestamp="2025-11-29 06:48:12 +0000 UTC" firstStartedPulling="2025-11-29 06:48:12.912592328 +0000 UTC m=+843.956974409" lastFinishedPulling="2025-11-29 06:48:16.916001385 +0000 UTC m=+847.960383476" observedRunningTime="2025-11-29 06:48:17.819466563 +0000 UTC m=+848.863848674" watchObservedRunningTime="2025-11-29 06:48:17.827159489 +0000 UTC m=+848.871541610" Nov 29 06:48:17 crc kubenswrapper[4947]: I1129 06:48:17.850516 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gkxps" podStartSLOduration=1.956643605 podStartE2EDuration="5.850480993s" podCreationTimestamp="2025-11-29 06:48:12 +0000 UTC" firstStartedPulling="2025-11-29 06:48:13.021295965 +0000 UTC m=+844.065678036" lastFinishedPulling="2025-11-29 06:48:16.915133303 +0000 UTC m=+847.959515424" observedRunningTime="2025-11-29 06:48:17.837890722 +0000 UTC m=+848.882272833" watchObservedRunningTime="2025-11-29 06:48:17.850480993 +0000 UTC m=+848.894863114" Nov 29 06:48:17 crc kubenswrapper[4947]: I1129 06:48:17.864313 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-cb2bc" podStartSLOduration=1.500382571 podStartE2EDuration="5.864280694s" podCreationTimestamp="2025-11-29 06:48:12 +0000 UTC" firstStartedPulling="2025-11-29 06:48:12.579674394 +0000 UTC m=+843.624056475" lastFinishedPulling="2025-11-29 06:48:16.943572507 +0000 UTC m=+847.987954598" observedRunningTime="2025-11-29 06:48:17.860927318 +0000 
UTC m=+848.905309439" watchObservedRunningTime="2025-11-29 06:48:17.864280694 +0000 UTC m=+848.908662775" Nov 29 06:48:19 crc kubenswrapper[4947]: I1129 06:48:19.828410 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w9rng" event={"ID":"06e121e7-c32a-419c-ab90-3ac8cd45fb7c","Type":"ContainerStarted","Data":"7296437b152f9ce814fbe79508e32b416e7b5db55484b1ea825464fee2b3ff5b"} Nov 29 06:48:19 crc kubenswrapper[4947]: I1129 06:48:19.856840 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w9rng" podStartSLOduration=1.5797656199999999 podStartE2EDuration="7.856811053s" podCreationTimestamp="2025-11-29 06:48:12 +0000 UTC" firstStartedPulling="2025-11-29 06:48:13.008733065 +0000 UTC m=+844.053115146" lastFinishedPulling="2025-11-29 06:48:19.285778488 +0000 UTC m=+850.330160579" observedRunningTime="2025-11-29 06:48:19.849683812 +0000 UTC m=+850.894065933" watchObservedRunningTime="2025-11-29 06:48:19.856811053 +0000 UTC m=+850.901193164" Nov 29 06:48:22 crc kubenswrapper[4947]: I1129 06:48:22.549025 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-cb2bc" Nov 29 06:48:22 crc kubenswrapper[4947]: I1129 06:48:22.893278 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:22 crc kubenswrapper[4947]: I1129 06:48:22.893389 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:22 crc kubenswrapper[4947]: I1129 06:48:22.901943 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:22 crc kubenswrapper[4947]: I1129 06:48:22.987548 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 06:48:22 crc kubenswrapper[4947]: I1129 06:48:22.987625 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 06:48:22 crc kubenswrapper[4947]: I1129 06:48:22.987676 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 06:48:22 crc kubenswrapper[4947]: I1129 06:48:22.988399 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e3f38270dbfc41785276b23821b9697dddfbb4108ac42aabfc9b7652679ef1e3"} pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 06:48:22 crc kubenswrapper[4947]: I1129 06:48:22.988454 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" containerID="cri-o://e3f38270dbfc41785276b23821b9697dddfbb4108ac42aabfc9b7652679ef1e3" gracePeriod=600 Nov 29 06:48:23 crc kubenswrapper[4947]: I1129 06:48:23.861537 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerID="e3f38270dbfc41785276b23821b9697dddfbb4108ac42aabfc9b7652679ef1e3" exitCode=0 Nov 29 06:48:23 crc kubenswrapper[4947]: I1129 06:48:23.863430 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerDied","Data":"e3f38270dbfc41785276b23821b9697dddfbb4108ac42aabfc9b7652679ef1e3"} Nov 29 06:48:23 crc kubenswrapper[4947]: I1129 06:48:23.863511 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"95afd1d0c4fb1119bc14de336e7d92cb2ee91cd1747056ef7ee978c29db619c9"} Nov 29 06:48:23 crc kubenswrapper[4947]: I1129 06:48:23.863538 4947 scope.go:117] "RemoveContainer" containerID="381c0e9d0a59dd5856ec3a6931be38d490899e1db40040b972c52a0ea9ed0855" Nov 29 06:48:23 crc kubenswrapper[4947]: I1129 06:48:23.868431 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6c5cb9448-4slqr" Nov 29 06:48:23 crc kubenswrapper[4947]: I1129 06:48:23.940832 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-nswtf"] Nov 29 06:48:32 crc kubenswrapper[4947]: I1129 06:48:32.778204 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gkxps" Nov 29 06:48:47 crc kubenswrapper[4947]: I1129 06:48:47.117333 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm"] Nov 29 06:48:47 crc kubenswrapper[4947]: I1129 06:48:47.119057 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm" Nov 29 06:48:47 crc kubenswrapper[4947]: I1129 06:48:47.120631 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/221f5984-f762-4e06-8026-b933d54eb4d6-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm\" (UID: \"221f5984-f762-4e06-8026-b933d54eb4d6\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm" Nov 29 06:48:47 crc kubenswrapper[4947]: I1129 06:48:47.120752 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/221f5984-f762-4e06-8026-b933d54eb4d6-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm\" (UID: \"221f5984-f762-4e06-8026-b933d54eb4d6\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm" Nov 29 06:48:47 crc kubenswrapper[4947]: I1129 06:48:47.120856 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25jhm\" (UniqueName: \"kubernetes.io/projected/221f5984-f762-4e06-8026-b933d54eb4d6-kube-api-access-25jhm\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm\" (UID: \"221f5984-f762-4e06-8026-b933d54eb4d6\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm" Nov 29 06:48:47 crc kubenswrapper[4947]: I1129 06:48:47.121393 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 29 06:48:47 crc kubenswrapper[4947]: I1129 06:48:47.134509 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm"] Nov 29 06:48:47 crc kubenswrapper[4947]: 
I1129 06:48:47.223044 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/221f5984-f762-4e06-8026-b933d54eb4d6-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm\" (UID: \"221f5984-f762-4e06-8026-b933d54eb4d6\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm" Nov 29 06:48:47 crc kubenswrapper[4947]: I1129 06:48:47.223284 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/221f5984-f762-4e06-8026-b933d54eb4d6-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm\" (UID: \"221f5984-f762-4e06-8026-b933d54eb4d6\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm" Nov 29 06:48:47 crc kubenswrapper[4947]: I1129 06:48:47.223432 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25jhm\" (UniqueName: \"kubernetes.io/projected/221f5984-f762-4e06-8026-b933d54eb4d6-kube-api-access-25jhm\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm\" (UID: \"221f5984-f762-4e06-8026-b933d54eb4d6\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm" Nov 29 06:48:47 crc kubenswrapper[4947]: I1129 06:48:47.224916 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/221f5984-f762-4e06-8026-b933d54eb4d6-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm\" (UID: \"221f5984-f762-4e06-8026-b933d54eb4d6\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm" Nov 29 06:48:47 crc kubenswrapper[4947]: I1129 06:48:47.225115 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/221f5984-f762-4e06-8026-b933d54eb4d6-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm\" (UID: \"221f5984-f762-4e06-8026-b933d54eb4d6\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm" Nov 29 06:48:47 crc kubenswrapper[4947]: I1129 06:48:47.252610 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25jhm\" (UniqueName: \"kubernetes.io/projected/221f5984-f762-4e06-8026-b933d54eb4d6-kube-api-access-25jhm\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm\" (UID: \"221f5984-f762-4e06-8026-b933d54eb4d6\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm" Nov 29 06:48:47 crc kubenswrapper[4947]: I1129 06:48:47.460256 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm" Nov 29 06:48:47 crc kubenswrapper[4947]: I1129 06:48:47.943294 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm"] Nov 29 06:48:48 crc kubenswrapper[4947]: I1129 06:48:48.050405 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm" event={"ID":"221f5984-f762-4e06-8026-b933d54eb4d6","Type":"ContainerStarted","Data":"2c4a84f1c15cefdf1425ef19502fc21b8ed9319ffa60bf775b3217e728fce21a"} Nov 29 06:48:48 crc kubenswrapper[4947]: I1129 06:48:48.984050 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-nswtf" podUID="711e27d0-dd37-4f6f-adae-5c04bb856f47" containerName="console" containerID="cri-o://3d7a0856b3eed704fd39d030c26cc763b373fa73bc10b6f0f261564436644ef1" gracePeriod=15 Nov 29 06:48:49 crc kubenswrapper[4947]: I1129 06:48:49.939419 4947 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-nswtf_711e27d0-dd37-4f6f-adae-5c04bb856f47/console/0.log" Nov 29 06:48:49 crc kubenswrapper[4947]: I1129 06:48:49.939877 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:48:49 crc kubenswrapper[4947]: I1129 06:48:49.967748 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-console-config\") pod \"711e27d0-dd37-4f6f-adae-5c04bb856f47\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " Nov 29 06:48:49 crc kubenswrapper[4947]: I1129 06:48:49.967832 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-service-ca\") pod \"711e27d0-dd37-4f6f-adae-5c04bb856f47\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " Nov 29 06:48:49 crc kubenswrapper[4947]: I1129 06:48:49.967863 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2fqm\" (UniqueName: \"kubernetes.io/projected/711e27d0-dd37-4f6f-adae-5c04bb856f47-kube-api-access-c2fqm\") pod \"711e27d0-dd37-4f6f-adae-5c04bb856f47\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " Nov 29 06:48:49 crc kubenswrapper[4947]: I1129 06:48:49.969101 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-console-config" (OuterVolumeSpecName: "console-config") pod "711e27d0-dd37-4f6f-adae-5c04bb856f47" (UID: "711e27d0-dd37-4f6f-adae-5c04bb856f47"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:48:49 crc kubenswrapper[4947]: I1129 06:48:49.969209 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-service-ca" (OuterVolumeSpecName: "service-ca") pod "711e27d0-dd37-4f6f-adae-5c04bb856f47" (UID: "711e27d0-dd37-4f6f-adae-5c04bb856f47"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:48:49 crc kubenswrapper[4947]: I1129 06:48:49.969471 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/711e27d0-dd37-4f6f-adae-5c04bb856f47-console-oauth-config\") pod \"711e27d0-dd37-4f6f-adae-5c04bb856f47\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " Nov 29 06:48:49 crc kubenswrapper[4947]: I1129 06:48:49.969543 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-trusted-ca-bundle\") pod \"711e27d0-dd37-4f6f-adae-5c04bb856f47\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " Nov 29 06:48:49 crc kubenswrapper[4947]: I1129 06:48:49.969586 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-oauth-serving-cert\") pod \"711e27d0-dd37-4f6f-adae-5c04bb856f47\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " Nov 29 06:48:49 crc kubenswrapper[4947]: I1129 06:48:49.969641 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/711e27d0-dd37-4f6f-adae-5c04bb856f47-console-serving-cert\") pod \"711e27d0-dd37-4f6f-adae-5c04bb856f47\" (UID: \"711e27d0-dd37-4f6f-adae-5c04bb856f47\") " Nov 29 06:48:49 crc kubenswrapper[4947]: I1129 06:48:49.969909 
4947 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-console-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:48:49 crc kubenswrapper[4947]: I1129 06:48:49.969934 4947 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 06:48:49 crc kubenswrapper[4947]: I1129 06:48:49.970195 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "711e27d0-dd37-4f6f-adae-5c04bb856f47" (UID: "711e27d0-dd37-4f6f-adae-5c04bb856f47"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:48:49 crc kubenswrapper[4947]: I1129 06:48:49.970382 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "711e27d0-dd37-4f6f-adae-5c04bb856f47" (UID: "711e27d0-dd37-4f6f-adae-5c04bb856f47"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:48:49 crc kubenswrapper[4947]: I1129 06:48:49.977572 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/711e27d0-dd37-4f6f-adae-5c04bb856f47-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "711e27d0-dd37-4f6f-adae-5c04bb856f47" (UID: "711e27d0-dd37-4f6f-adae-5c04bb856f47"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:48:49 crc kubenswrapper[4947]: I1129 06:48:49.977623 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/711e27d0-dd37-4f6f-adae-5c04bb856f47-kube-api-access-c2fqm" (OuterVolumeSpecName: "kube-api-access-c2fqm") pod "711e27d0-dd37-4f6f-adae-5c04bb856f47" (UID: "711e27d0-dd37-4f6f-adae-5c04bb856f47"). InnerVolumeSpecName "kube-api-access-c2fqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:48:49 crc kubenswrapper[4947]: I1129 06:48:49.978376 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/711e27d0-dd37-4f6f-adae-5c04bb856f47-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "711e27d0-dd37-4f6f-adae-5c04bb856f47" (UID: "711e27d0-dd37-4f6f-adae-5c04bb856f47"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:48:50 crc kubenswrapper[4947]: I1129 06:48:50.069411 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-nswtf_711e27d0-dd37-4f6f-adae-5c04bb856f47/console/0.log" Nov 29 06:48:50 crc kubenswrapper[4947]: I1129 06:48:50.069480 4947 generic.go:334] "Generic (PLEG): container finished" podID="711e27d0-dd37-4f6f-adae-5c04bb856f47" containerID="3d7a0856b3eed704fd39d030c26cc763b373fa73bc10b6f0f261564436644ef1" exitCode=2 Nov 29 06:48:50 crc kubenswrapper[4947]: I1129 06:48:50.069563 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nswtf" event={"ID":"711e27d0-dd37-4f6f-adae-5c04bb856f47","Type":"ContainerDied","Data":"3d7a0856b3eed704fd39d030c26cc763b373fa73bc10b6f0f261564436644ef1"} Nov 29 06:48:50 crc kubenswrapper[4947]: I1129 06:48:50.069602 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nswtf" 
event={"ID":"711e27d0-dd37-4f6f-adae-5c04bb856f47","Type":"ContainerDied","Data":"1d39fe89576b447180b030d4a7d83e6ad08d56eab53acfadebcc72f307fc56d1"} Nov 29 06:48:50 crc kubenswrapper[4947]: I1129 06:48:50.069591 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-nswtf" Nov 29 06:48:50 crc kubenswrapper[4947]: I1129 06:48:50.069643 4947 scope.go:117] "RemoveContainer" containerID="3d7a0856b3eed704fd39d030c26cc763b373fa73bc10b6f0f261564436644ef1" Nov 29 06:48:50 crc kubenswrapper[4947]: I1129 06:48:50.070492 4947 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/711e27d0-dd37-4f6f-adae-5c04bb856f47-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:48:50 crc kubenswrapper[4947]: I1129 06:48:50.070520 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:48:50 crc kubenswrapper[4947]: I1129 06:48:50.070530 4947 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/711e27d0-dd37-4f6f-adae-5c04bb856f47-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:48:50 crc kubenswrapper[4947]: I1129 06:48:50.070601 4947 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/711e27d0-dd37-4f6f-adae-5c04bb856f47-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 06:48:50 crc kubenswrapper[4947]: I1129 06:48:50.070612 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2fqm\" (UniqueName: \"kubernetes.io/projected/711e27d0-dd37-4f6f-adae-5c04bb856f47-kube-api-access-c2fqm\") on node \"crc\" DevicePath \"\"" Nov 29 06:48:50 crc kubenswrapper[4947]: I1129 06:48:50.073112 4947 generic.go:334] "Generic 
(PLEG): container finished" podID="221f5984-f762-4e06-8026-b933d54eb4d6" containerID="48db30ddf386348cf6e473ce2880ae94650bb979c9d84498abc8d57c94bb39a5" exitCode=0 Nov 29 06:48:50 crc kubenswrapper[4947]: I1129 06:48:50.073173 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm" event={"ID":"221f5984-f762-4e06-8026-b933d54eb4d6","Type":"ContainerDied","Data":"48db30ddf386348cf6e473ce2880ae94650bb979c9d84498abc8d57c94bb39a5"} Nov 29 06:48:50 crc kubenswrapper[4947]: I1129 06:48:50.099554 4947 scope.go:117] "RemoveContainer" containerID="3d7a0856b3eed704fd39d030c26cc763b373fa73bc10b6f0f261564436644ef1" Nov 29 06:48:50 crc kubenswrapper[4947]: E1129 06:48:50.100343 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d7a0856b3eed704fd39d030c26cc763b373fa73bc10b6f0f261564436644ef1\": container with ID starting with 3d7a0856b3eed704fd39d030c26cc763b373fa73bc10b6f0f261564436644ef1 not found: ID does not exist" containerID="3d7a0856b3eed704fd39d030c26cc763b373fa73bc10b6f0f261564436644ef1" Nov 29 06:48:50 crc kubenswrapper[4947]: I1129 06:48:50.100372 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d7a0856b3eed704fd39d030c26cc763b373fa73bc10b6f0f261564436644ef1"} err="failed to get container status \"3d7a0856b3eed704fd39d030c26cc763b373fa73bc10b6f0f261564436644ef1\": rpc error: code = NotFound desc = could not find container \"3d7a0856b3eed704fd39d030c26cc763b373fa73bc10b6f0f261564436644ef1\": container with ID starting with 3d7a0856b3eed704fd39d030c26cc763b373fa73bc10b6f0f261564436644ef1 not found: ID does not exist" Nov 29 06:48:50 crc kubenswrapper[4947]: I1129 06:48:50.115839 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-nswtf"] Nov 29 06:48:50 crc kubenswrapper[4947]: I1129 06:48:50.122563 4947 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-nswtf"] Nov 29 06:48:51 crc kubenswrapper[4947]: I1129 06:48:51.189542 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="711e27d0-dd37-4f6f-adae-5c04bb856f47" path="/var/lib/kubelet/pods/711e27d0-dd37-4f6f-adae-5c04bb856f47/volumes" Nov 29 06:48:52 crc kubenswrapper[4947]: I1129 06:48:52.100530 4947 generic.go:334] "Generic (PLEG): container finished" podID="221f5984-f762-4e06-8026-b933d54eb4d6" containerID="75f8be32791812299be221c86076c4c66e512672cf740d9d3dfcdf66be5872fc" exitCode=0 Nov 29 06:48:52 crc kubenswrapper[4947]: I1129 06:48:52.100639 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm" event={"ID":"221f5984-f762-4e06-8026-b933d54eb4d6","Type":"ContainerDied","Data":"75f8be32791812299be221c86076c4c66e512672cf740d9d3dfcdf66be5872fc"} Nov 29 06:48:53 crc kubenswrapper[4947]: I1129 06:48:53.110939 4947 generic.go:334] "Generic (PLEG): container finished" podID="221f5984-f762-4e06-8026-b933d54eb4d6" containerID="bb7328466a80fa1fca8e4bf058128ac7b6cc08ebd53125c25675ac04fff4f5b5" exitCode=0 Nov 29 06:48:53 crc kubenswrapper[4947]: I1129 06:48:53.111045 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm" event={"ID":"221f5984-f762-4e06-8026-b933d54eb4d6","Type":"ContainerDied","Data":"bb7328466a80fa1fca8e4bf058128ac7b6cc08ebd53125c25675ac04fff4f5b5"} Nov 29 06:48:54 crc kubenswrapper[4947]: I1129 06:48:54.598197 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm" Nov 29 06:48:54 crc kubenswrapper[4947]: I1129 06:48:54.698661 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/221f5984-f762-4e06-8026-b933d54eb4d6-bundle\") pod \"221f5984-f762-4e06-8026-b933d54eb4d6\" (UID: \"221f5984-f762-4e06-8026-b933d54eb4d6\") " Nov 29 06:48:54 crc kubenswrapper[4947]: I1129 06:48:54.698769 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/221f5984-f762-4e06-8026-b933d54eb4d6-util\") pod \"221f5984-f762-4e06-8026-b933d54eb4d6\" (UID: \"221f5984-f762-4e06-8026-b933d54eb4d6\") " Nov 29 06:48:54 crc kubenswrapper[4947]: I1129 06:48:54.698821 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25jhm\" (UniqueName: \"kubernetes.io/projected/221f5984-f762-4e06-8026-b933d54eb4d6-kube-api-access-25jhm\") pod \"221f5984-f762-4e06-8026-b933d54eb4d6\" (UID: \"221f5984-f762-4e06-8026-b933d54eb4d6\") " Nov 29 06:48:54 crc kubenswrapper[4947]: I1129 06:48:54.700156 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/221f5984-f762-4e06-8026-b933d54eb4d6-bundle" (OuterVolumeSpecName: "bundle") pod "221f5984-f762-4e06-8026-b933d54eb4d6" (UID: "221f5984-f762-4e06-8026-b933d54eb4d6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:48:54 crc kubenswrapper[4947]: I1129 06:48:54.706879 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/221f5984-f762-4e06-8026-b933d54eb4d6-kube-api-access-25jhm" (OuterVolumeSpecName: "kube-api-access-25jhm") pod "221f5984-f762-4e06-8026-b933d54eb4d6" (UID: "221f5984-f762-4e06-8026-b933d54eb4d6"). InnerVolumeSpecName "kube-api-access-25jhm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:48:54 crc kubenswrapper[4947]: I1129 06:48:54.800719 4947 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/221f5984-f762-4e06-8026-b933d54eb4d6-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:48:54 crc kubenswrapper[4947]: I1129 06:48:54.800764 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25jhm\" (UniqueName: \"kubernetes.io/projected/221f5984-f762-4e06-8026-b933d54eb4d6-kube-api-access-25jhm\") on node \"crc\" DevicePath \"\"" Nov 29 06:48:54 crc kubenswrapper[4947]: I1129 06:48:54.948101 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/221f5984-f762-4e06-8026-b933d54eb4d6-util" (OuterVolumeSpecName: "util") pod "221f5984-f762-4e06-8026-b933d54eb4d6" (UID: "221f5984-f762-4e06-8026-b933d54eb4d6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:48:55 crc kubenswrapper[4947]: I1129 06:48:55.004175 4947 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/221f5984-f762-4e06-8026-b933d54eb4d6-util\") on node \"crc\" DevicePath \"\"" Nov 29 06:48:55 crc kubenswrapper[4947]: I1129 06:48:55.354877 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm" event={"ID":"221f5984-f762-4e06-8026-b933d54eb4d6","Type":"ContainerDied","Data":"2c4a84f1c15cefdf1425ef19502fc21b8ed9319ffa60bf775b3217e728fce21a"} Nov 29 06:48:55 crc kubenswrapper[4947]: I1129 06:48:55.355363 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c4a84f1c15cefdf1425ef19502fc21b8ed9319ffa60bf775b3217e728fce21a" Nov 29 06:48:55 crc kubenswrapper[4947]: I1129 06:48:55.355011 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm" Nov 29 06:49:06 crc kubenswrapper[4947]: I1129 06:49:06.971758 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-65c8b8f597-t4rnb"] Nov 29 06:49:06 crc kubenswrapper[4947]: E1129 06:49:06.972440 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221f5984-f762-4e06-8026-b933d54eb4d6" containerName="pull" Nov 29 06:49:06 crc kubenswrapper[4947]: I1129 06:49:06.972455 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="221f5984-f762-4e06-8026-b933d54eb4d6" containerName="pull" Nov 29 06:49:06 crc kubenswrapper[4947]: E1129 06:49:06.972473 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="711e27d0-dd37-4f6f-adae-5c04bb856f47" containerName="console" Nov 29 06:49:06 crc kubenswrapper[4947]: I1129 06:49:06.972479 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="711e27d0-dd37-4f6f-adae-5c04bb856f47" containerName="console" Nov 29 06:49:06 crc kubenswrapper[4947]: E1129 06:49:06.972488 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221f5984-f762-4e06-8026-b933d54eb4d6" containerName="extract" Nov 29 06:49:06 crc kubenswrapper[4947]: I1129 06:49:06.972495 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="221f5984-f762-4e06-8026-b933d54eb4d6" containerName="extract" Nov 29 06:49:06 crc kubenswrapper[4947]: E1129 06:49:06.972513 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221f5984-f762-4e06-8026-b933d54eb4d6" containerName="util" Nov 29 06:49:06 crc kubenswrapper[4947]: I1129 06:49:06.972519 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="221f5984-f762-4e06-8026-b933d54eb4d6" containerName="util" Nov 29 06:49:06 crc kubenswrapper[4947]: I1129 06:49:06.972610 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="711e27d0-dd37-4f6f-adae-5c04bb856f47" 
containerName="console" Nov 29 06:49:06 crc kubenswrapper[4947]: I1129 06:49:06.972624 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="221f5984-f762-4e06-8026-b933d54eb4d6" containerName="extract" Nov 29 06:49:06 crc kubenswrapper[4947]: I1129 06:49:06.973110 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-65c8b8f597-t4rnb" Nov 29 06:49:06 crc kubenswrapper[4947]: I1129 06:49:06.975620 4947 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 29 06:49:06 crc kubenswrapper[4947]: I1129 06:49:06.975708 4947 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 29 06:49:06 crc kubenswrapper[4947]: I1129 06:49:06.976117 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 29 06:49:06 crc kubenswrapper[4947]: I1129 06:49:06.976381 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 29 06:49:06 crc kubenswrapper[4947]: I1129 06:49:06.976515 4947 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-7vs7x" Nov 29 06:49:06 crc kubenswrapper[4947]: I1129 06:49:06.994574 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-65c8b8f597-t4rnb"] Nov 29 06:49:06 crc kubenswrapper[4947]: I1129 06:49:06.998256 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/41375968-ea4e-4c61-abf1-30e04742292b-apiservice-cert\") pod \"metallb-operator-controller-manager-65c8b8f597-t4rnb\" (UID: \"41375968-ea4e-4c61-abf1-30e04742292b\") " pod="metallb-system/metallb-operator-controller-manager-65c8b8f597-t4rnb" Nov 
29 06:49:06 crc kubenswrapper[4947]: I1129 06:49:06.998320 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/41375968-ea4e-4c61-abf1-30e04742292b-webhook-cert\") pod \"metallb-operator-controller-manager-65c8b8f597-t4rnb\" (UID: \"41375968-ea4e-4c61-abf1-30e04742292b\") " pod="metallb-system/metallb-operator-controller-manager-65c8b8f597-t4rnb" Nov 29 06:49:06 crc kubenswrapper[4947]: I1129 06:49:06.998368 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4thzp\" (UniqueName: \"kubernetes.io/projected/41375968-ea4e-4c61-abf1-30e04742292b-kube-api-access-4thzp\") pod \"metallb-operator-controller-manager-65c8b8f597-t4rnb\" (UID: \"41375968-ea4e-4c61-abf1-30e04742292b\") " pod="metallb-system/metallb-operator-controller-manager-65c8b8f597-t4rnb" Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.099344 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/41375968-ea4e-4c61-abf1-30e04742292b-apiservice-cert\") pod \"metallb-operator-controller-manager-65c8b8f597-t4rnb\" (UID: \"41375968-ea4e-4c61-abf1-30e04742292b\") " pod="metallb-system/metallb-operator-controller-manager-65c8b8f597-t4rnb" Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.099421 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/41375968-ea4e-4c61-abf1-30e04742292b-webhook-cert\") pod \"metallb-operator-controller-manager-65c8b8f597-t4rnb\" (UID: \"41375968-ea4e-4c61-abf1-30e04742292b\") " pod="metallb-system/metallb-operator-controller-manager-65c8b8f597-t4rnb" Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.099489 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4thzp\" (UniqueName: 
\"kubernetes.io/projected/41375968-ea4e-4c61-abf1-30e04742292b-kube-api-access-4thzp\") pod \"metallb-operator-controller-manager-65c8b8f597-t4rnb\" (UID: \"41375968-ea4e-4c61-abf1-30e04742292b\") " pod="metallb-system/metallb-operator-controller-manager-65c8b8f597-t4rnb" Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.107942 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/41375968-ea4e-4c61-abf1-30e04742292b-apiservice-cert\") pod \"metallb-operator-controller-manager-65c8b8f597-t4rnb\" (UID: \"41375968-ea4e-4c61-abf1-30e04742292b\") " pod="metallb-system/metallb-operator-controller-manager-65c8b8f597-t4rnb" Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.108065 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/41375968-ea4e-4c61-abf1-30e04742292b-webhook-cert\") pod \"metallb-operator-controller-manager-65c8b8f597-t4rnb\" (UID: \"41375968-ea4e-4c61-abf1-30e04742292b\") " pod="metallb-system/metallb-operator-controller-manager-65c8b8f597-t4rnb" Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.120077 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4thzp\" (UniqueName: \"kubernetes.io/projected/41375968-ea4e-4c61-abf1-30e04742292b-kube-api-access-4thzp\") pod \"metallb-operator-controller-manager-65c8b8f597-t4rnb\" (UID: \"41375968-ea4e-4c61-abf1-30e04742292b\") " pod="metallb-system/metallb-operator-controller-manager-65c8b8f597-t4rnb" Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.301682 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-65c8b8f597-t4rnb" Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.304688 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-77c88b6b8-m2k6x"] Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.305496 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-77c88b6b8-m2k6x" Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.310753 4947 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-4wwbw" Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.311081 4947 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.311122 4947 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.327734 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-77c88b6b8-m2k6x"] Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.405580 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d948a37-6c6c-4abc-83ea-b9c87e8dd6e9-webhook-cert\") pod \"metallb-operator-webhook-server-77c88b6b8-m2k6x\" (UID: \"6d948a37-6c6c-4abc-83ea-b9c87e8dd6e9\") " pod="metallb-system/metallb-operator-webhook-server-77c88b6b8-m2k6x" Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.406577 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d948a37-6c6c-4abc-83ea-b9c87e8dd6e9-apiservice-cert\") pod \"metallb-operator-webhook-server-77c88b6b8-m2k6x\" 
(UID: \"6d948a37-6c6c-4abc-83ea-b9c87e8dd6e9\") " pod="metallb-system/metallb-operator-webhook-server-77c88b6b8-m2k6x" Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.406680 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqdwx\" (UniqueName: \"kubernetes.io/projected/6d948a37-6c6c-4abc-83ea-b9c87e8dd6e9-kube-api-access-pqdwx\") pod \"metallb-operator-webhook-server-77c88b6b8-m2k6x\" (UID: \"6d948a37-6c6c-4abc-83ea-b9c87e8dd6e9\") " pod="metallb-system/metallb-operator-webhook-server-77c88b6b8-m2k6x" Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.508561 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d948a37-6c6c-4abc-83ea-b9c87e8dd6e9-webhook-cert\") pod \"metallb-operator-webhook-server-77c88b6b8-m2k6x\" (UID: \"6d948a37-6c6c-4abc-83ea-b9c87e8dd6e9\") " pod="metallb-system/metallb-operator-webhook-server-77c88b6b8-m2k6x" Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.508640 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d948a37-6c6c-4abc-83ea-b9c87e8dd6e9-apiservice-cert\") pod \"metallb-operator-webhook-server-77c88b6b8-m2k6x\" (UID: \"6d948a37-6c6c-4abc-83ea-b9c87e8dd6e9\") " pod="metallb-system/metallb-operator-webhook-server-77c88b6b8-m2k6x" Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.508674 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqdwx\" (UniqueName: \"kubernetes.io/projected/6d948a37-6c6c-4abc-83ea-b9c87e8dd6e9-kube-api-access-pqdwx\") pod \"metallb-operator-webhook-server-77c88b6b8-m2k6x\" (UID: \"6d948a37-6c6c-4abc-83ea-b9c87e8dd6e9\") " pod="metallb-system/metallb-operator-webhook-server-77c88b6b8-m2k6x" Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.517128 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d948a37-6c6c-4abc-83ea-b9c87e8dd6e9-apiservice-cert\") pod \"metallb-operator-webhook-server-77c88b6b8-m2k6x\" (UID: \"6d948a37-6c6c-4abc-83ea-b9c87e8dd6e9\") " pod="metallb-system/metallb-operator-webhook-server-77c88b6b8-m2k6x" Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.517173 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d948a37-6c6c-4abc-83ea-b9c87e8dd6e9-webhook-cert\") pod \"metallb-operator-webhook-server-77c88b6b8-m2k6x\" (UID: \"6d948a37-6c6c-4abc-83ea-b9c87e8dd6e9\") " pod="metallb-system/metallb-operator-webhook-server-77c88b6b8-m2k6x" Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.531597 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqdwx\" (UniqueName: \"kubernetes.io/projected/6d948a37-6c6c-4abc-83ea-b9c87e8dd6e9-kube-api-access-pqdwx\") pod \"metallb-operator-webhook-server-77c88b6b8-m2k6x\" (UID: \"6d948a37-6c6c-4abc-83ea-b9c87e8dd6e9\") " pod="metallb-system/metallb-operator-webhook-server-77c88b6b8-m2k6x" Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.634780 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-77c88b6b8-m2k6x" Nov 29 06:49:07 crc kubenswrapper[4947]: I1129 06:49:07.724054 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-65c8b8f597-t4rnb"] Nov 29 06:49:07 crc kubenswrapper[4947]: W1129 06:49:07.742819 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41375968_ea4e_4c61_abf1_30e04742292b.slice/crio-fb3386fd2d80022eb8228843a331e585b0d8bae260aefe5ab6668c9a4e880cf0 WatchSource:0}: Error finding container fb3386fd2d80022eb8228843a331e585b0d8bae260aefe5ab6668c9a4e880cf0: Status 404 returned error can't find the container with id fb3386fd2d80022eb8228843a331e585b0d8bae260aefe5ab6668c9a4e880cf0 Nov 29 06:49:08 crc kubenswrapper[4947]: I1129 06:49:08.106956 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-77c88b6b8-m2k6x"] Nov 29 06:49:08 crc kubenswrapper[4947]: I1129 06:49:08.450996 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-77c88b6b8-m2k6x" event={"ID":"6d948a37-6c6c-4abc-83ea-b9c87e8dd6e9","Type":"ContainerStarted","Data":"f4ed478ea27352abd67789f84f3f723189d025c3cb0ded2eca1f08a0a7f0be68"} Nov 29 06:49:08 crc kubenswrapper[4947]: I1129 06:49:08.452985 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-65c8b8f597-t4rnb" event={"ID":"41375968-ea4e-4c61-abf1-30e04742292b","Type":"ContainerStarted","Data":"fb3386fd2d80022eb8228843a331e585b0d8bae260aefe5ab6668c9a4e880cf0"} Nov 29 06:49:15 crc kubenswrapper[4947]: I1129 06:49:15.507125 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-65c8b8f597-t4rnb" 
event={"ID":"41375968-ea4e-4c61-abf1-30e04742292b","Type":"ContainerStarted","Data":"833d54ec25e962f4fbdae2a790d6d6dc0c30aa5386c3f5f20403375dff3f4bbc"} Nov 29 06:49:15 crc kubenswrapper[4947]: I1129 06:49:15.507961 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-65c8b8f597-t4rnb" Nov 29 06:49:15 crc kubenswrapper[4947]: I1129 06:49:15.509975 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-77c88b6b8-m2k6x" event={"ID":"6d948a37-6c6c-4abc-83ea-b9c87e8dd6e9","Type":"ContainerStarted","Data":"32b173ccbd9eb5fc566d812b34cedcdd80d3f731086aee79d468372a9936f20f"} Nov 29 06:49:15 crc kubenswrapper[4947]: I1129 06:49:15.510653 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-77c88b6b8-m2k6x" Nov 29 06:49:15 crc kubenswrapper[4947]: I1129 06:49:15.531196 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-65c8b8f597-t4rnb" podStartSLOduration=2.07075399 podStartE2EDuration="9.531177445s" podCreationTimestamp="2025-11-29 06:49:06 +0000 UTC" firstStartedPulling="2025-11-29 06:49:07.759133278 +0000 UTC m=+898.803515359" lastFinishedPulling="2025-11-29 06:49:15.219556733 +0000 UTC m=+906.263938814" observedRunningTime="2025-11-29 06:49:15.527360698 +0000 UTC m=+906.571742799" watchObservedRunningTime="2025-11-29 06:49:15.531177445 +0000 UTC m=+906.575559526" Nov 29 06:49:15 crc kubenswrapper[4947]: I1129 06:49:15.553984 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-77c88b6b8-m2k6x" podStartSLOduration=1.3819653619999999 podStartE2EDuration="8.553965215s" podCreationTimestamp="2025-11-29 06:49:07 +0000 UTC" firstStartedPulling="2025-11-29 06:49:08.122673292 +0000 UTC m=+899.167055373" lastFinishedPulling="2025-11-29 
06:49:15.294673145 +0000 UTC m=+906.339055226" observedRunningTime="2025-11-29 06:49:15.548538377 +0000 UTC m=+906.592920458" watchObservedRunningTime="2025-11-29 06:49:15.553965215 +0000 UTC m=+906.598347296" Nov 29 06:49:27 crc kubenswrapper[4947]: I1129 06:49:27.647645 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-77c88b6b8-m2k6x" Nov 29 06:49:47 crc kubenswrapper[4947]: I1129 06:49:47.305268 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-65c8b8f597-t4rnb" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.161295 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-kkkmt"] Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.162421 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kkkmt" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.164656 4947 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-tx5m8" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.164803 4947 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.166778 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-j6qpj"] Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.169873 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.174617 4947 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.174630 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.176559 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-kkkmt"] Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.249301 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b277e4bf-960a-45ad-a864-c6bfb22ade67-frr-conf\") pod \"frr-k8s-j6qpj\" (UID: \"b277e4bf-960a-45ad-a864-c6bfb22ade67\") " pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.249392 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b277e4bf-960a-45ad-a864-c6bfb22ade67-frr-startup\") pod \"frr-k8s-j6qpj\" (UID: \"b277e4bf-960a-45ad-a864-c6bfb22ade67\") " pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.249495 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7r52\" (UniqueName: \"kubernetes.io/projected/207ad8ab-e9b8-437c-a649-24d58d587eb9-kube-api-access-c7r52\") pod \"frr-k8s-webhook-server-7fcb986d4-kkkmt\" (UID: \"207ad8ab-e9b8-437c-a649-24d58d587eb9\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kkkmt" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.249523 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/b277e4bf-960a-45ad-a864-c6bfb22ade67-metrics-certs\") pod \"frr-k8s-j6qpj\" (UID: \"b277e4bf-960a-45ad-a864-c6bfb22ade67\") " pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.249662 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/207ad8ab-e9b8-437c-a649-24d58d587eb9-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-kkkmt\" (UID: \"207ad8ab-e9b8-437c-a649-24d58d587eb9\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kkkmt" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.249695 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzrjb\" (UniqueName: \"kubernetes.io/projected/b277e4bf-960a-45ad-a864-c6bfb22ade67-kube-api-access-xzrjb\") pod \"frr-k8s-j6qpj\" (UID: \"b277e4bf-960a-45ad-a864-c6bfb22ade67\") " pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.249731 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b277e4bf-960a-45ad-a864-c6bfb22ade67-metrics\") pod \"frr-k8s-j6qpj\" (UID: \"b277e4bf-960a-45ad-a864-c6bfb22ade67\") " pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.249761 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b277e4bf-960a-45ad-a864-c6bfb22ade67-frr-sockets\") pod \"frr-k8s-j6qpj\" (UID: \"b277e4bf-960a-45ad-a864-c6bfb22ade67\") " pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.249823 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/b277e4bf-960a-45ad-a864-c6bfb22ade67-reloader\") pod \"frr-k8s-j6qpj\" (UID: \"b277e4bf-960a-45ad-a864-c6bfb22ade67\") " pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.286578 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-6lb2j"] Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.288025 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-6lb2j" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.290103 4947 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.291178 4947 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.291383 4947 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-tpmq2" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.291741 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.309854 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-dmwxs"] Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.311264 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-dmwxs" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.316024 4947 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.321717 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-dmwxs"] Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.357746 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b277e4bf-960a-45ad-a864-c6bfb22ade67-frr-startup\") pod \"frr-k8s-j6qpj\" (UID: \"b277e4bf-960a-45ad-a864-c6bfb22ade67\") " pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.357835 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7r52\" (UniqueName: \"kubernetes.io/projected/207ad8ab-e9b8-437c-a649-24d58d587eb9-kube-api-access-c7r52\") pod \"frr-k8s-webhook-server-7fcb986d4-kkkmt\" (UID: \"207ad8ab-e9b8-437c-a649-24d58d587eb9\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kkkmt" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.357872 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b277e4bf-960a-45ad-a864-c6bfb22ade67-metrics-certs\") pod \"frr-k8s-j6qpj\" (UID: \"b277e4bf-960a-45ad-a864-c6bfb22ade67\") " pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.357910 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/207ad8ab-e9b8-437c-a649-24d58d587eb9-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-kkkmt\" (UID: \"207ad8ab-e9b8-437c-a649-24d58d587eb9\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kkkmt" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 
06:49:48.357935 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzrjb\" (UniqueName: \"kubernetes.io/projected/b277e4bf-960a-45ad-a864-c6bfb22ade67-kube-api-access-xzrjb\") pod \"frr-k8s-j6qpj\" (UID: \"b277e4bf-960a-45ad-a864-c6bfb22ade67\") " pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.357974 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b277e4bf-960a-45ad-a864-c6bfb22ade67-metrics\") pod \"frr-k8s-j6qpj\" (UID: \"b277e4bf-960a-45ad-a864-c6bfb22ade67\") " pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.358008 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b277e4bf-960a-45ad-a864-c6bfb22ade67-frr-sockets\") pod \"frr-k8s-j6qpj\" (UID: \"b277e4bf-960a-45ad-a864-c6bfb22ade67\") " pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.358048 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b277e4bf-960a-45ad-a864-c6bfb22ade67-reloader\") pod \"frr-k8s-j6qpj\" (UID: \"b277e4bf-960a-45ad-a864-c6bfb22ade67\") " pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.358113 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b277e4bf-960a-45ad-a864-c6bfb22ade67-frr-conf\") pod \"frr-k8s-j6qpj\" (UID: \"b277e4bf-960a-45ad-a864-c6bfb22ade67\") " pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.359196 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b277e4bf-960a-45ad-a864-c6bfb22ade67-frr-conf\") pod 
\"frr-k8s-j6qpj\" (UID: \"b277e4bf-960a-45ad-a864-c6bfb22ade67\") " pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.360013 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b277e4bf-960a-45ad-a864-c6bfb22ade67-metrics\") pod \"frr-k8s-j6qpj\" (UID: \"b277e4bf-960a-45ad-a864-c6bfb22ade67\") " pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.360505 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b277e4bf-960a-45ad-a864-c6bfb22ade67-frr-sockets\") pod \"frr-k8s-j6qpj\" (UID: \"b277e4bf-960a-45ad-a864-c6bfb22ade67\") " pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.360772 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b277e4bf-960a-45ad-a864-c6bfb22ade67-reloader\") pod \"frr-k8s-j6qpj\" (UID: \"b277e4bf-960a-45ad-a864-c6bfb22ade67\") " pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.360977 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b277e4bf-960a-45ad-a864-c6bfb22ade67-frr-startup\") pod \"frr-k8s-j6qpj\" (UID: \"b277e4bf-960a-45ad-a864-c6bfb22ade67\") " pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.367918 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/207ad8ab-e9b8-437c-a649-24d58d587eb9-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-kkkmt\" (UID: \"207ad8ab-e9b8-437c-a649-24d58d587eb9\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kkkmt" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.367950 4947 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b277e4bf-960a-45ad-a864-c6bfb22ade67-metrics-certs\") pod \"frr-k8s-j6qpj\" (UID: \"b277e4bf-960a-45ad-a864-c6bfb22ade67\") " pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.384319 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzrjb\" (UniqueName: \"kubernetes.io/projected/b277e4bf-960a-45ad-a864-c6bfb22ade67-kube-api-access-xzrjb\") pod \"frr-k8s-j6qpj\" (UID: \"b277e4bf-960a-45ad-a864-c6bfb22ade67\") " pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.384605 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7r52\" (UniqueName: \"kubernetes.io/projected/207ad8ab-e9b8-437c-a649-24d58d587eb9-kube-api-access-c7r52\") pod \"frr-k8s-webhook-server-7fcb986d4-kkkmt\" (UID: \"207ad8ab-e9b8-437c-a649-24d58d587eb9\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kkkmt" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.459579 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e020145b-4005-44b0-89a6-293ea42f44f6-metallb-excludel2\") pod \"speaker-6lb2j\" (UID: \"e020145b-4005-44b0-89a6-293ea42f44f6\") " pod="metallb-system/speaker-6lb2j" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.459658 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8215fd44-dc8d-4791-992b-42a573cdfbed-cert\") pod \"controller-f8648f98b-dmwxs\" (UID: \"8215fd44-dc8d-4791-992b-42a573cdfbed\") " pod="metallb-system/controller-f8648f98b-dmwxs" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.459726 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hsx2\" (UniqueName: 
\"kubernetes.io/projected/8215fd44-dc8d-4791-992b-42a573cdfbed-kube-api-access-2hsx2\") pod \"controller-f8648f98b-dmwxs\" (UID: \"8215fd44-dc8d-4791-992b-42a573cdfbed\") " pod="metallb-system/controller-f8648f98b-dmwxs" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.459793 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e020145b-4005-44b0-89a6-293ea42f44f6-metrics-certs\") pod \"speaker-6lb2j\" (UID: \"e020145b-4005-44b0-89a6-293ea42f44f6\") " pod="metallb-system/speaker-6lb2j" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.459838 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e020145b-4005-44b0-89a6-293ea42f44f6-memberlist\") pod \"speaker-6lb2j\" (UID: \"e020145b-4005-44b0-89a6-293ea42f44f6\") " pod="metallb-system/speaker-6lb2j" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.459928 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpznx\" (UniqueName: \"kubernetes.io/projected/e020145b-4005-44b0-89a6-293ea42f44f6-kube-api-access-tpznx\") pod \"speaker-6lb2j\" (UID: \"e020145b-4005-44b0-89a6-293ea42f44f6\") " pod="metallb-system/speaker-6lb2j" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.459983 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8215fd44-dc8d-4791-992b-42a573cdfbed-metrics-certs\") pod \"controller-f8648f98b-dmwxs\" (UID: \"8215fd44-dc8d-4791-992b-42a573cdfbed\") " pod="metallb-system/controller-f8648f98b-dmwxs" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.488369 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kkkmt" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.504290 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.567557 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e020145b-4005-44b0-89a6-293ea42f44f6-metallb-excludel2\") pod \"speaker-6lb2j\" (UID: \"e020145b-4005-44b0-89a6-293ea42f44f6\") " pod="metallb-system/speaker-6lb2j" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.567789 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hsx2\" (UniqueName: \"kubernetes.io/projected/8215fd44-dc8d-4791-992b-42a573cdfbed-kube-api-access-2hsx2\") pod \"controller-f8648f98b-dmwxs\" (UID: \"8215fd44-dc8d-4791-992b-42a573cdfbed\") " pod="metallb-system/controller-f8648f98b-dmwxs" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.567827 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8215fd44-dc8d-4791-992b-42a573cdfbed-cert\") pod \"controller-f8648f98b-dmwxs\" (UID: \"8215fd44-dc8d-4791-992b-42a573cdfbed\") " pod="metallb-system/controller-f8648f98b-dmwxs" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.567871 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e020145b-4005-44b0-89a6-293ea42f44f6-metrics-certs\") pod \"speaker-6lb2j\" (UID: \"e020145b-4005-44b0-89a6-293ea42f44f6\") " pod="metallb-system/speaker-6lb2j" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.567904 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/e020145b-4005-44b0-89a6-293ea42f44f6-memberlist\") pod \"speaker-6lb2j\" (UID: \"e020145b-4005-44b0-89a6-293ea42f44f6\") " pod="metallb-system/speaker-6lb2j" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.567976 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpznx\" (UniqueName: \"kubernetes.io/projected/e020145b-4005-44b0-89a6-293ea42f44f6-kube-api-access-tpznx\") pod \"speaker-6lb2j\" (UID: \"e020145b-4005-44b0-89a6-293ea42f44f6\") " pod="metallb-system/speaker-6lb2j" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.568019 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8215fd44-dc8d-4791-992b-42a573cdfbed-metrics-certs\") pod \"controller-f8648f98b-dmwxs\" (UID: \"8215fd44-dc8d-4791-992b-42a573cdfbed\") " pod="metallb-system/controller-f8648f98b-dmwxs" Nov 29 06:49:48 crc kubenswrapper[4947]: E1129 06:49:48.569060 4947 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 29 06:49:48 crc kubenswrapper[4947]: E1129 06:49:48.569252 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e020145b-4005-44b0-89a6-293ea42f44f6-memberlist podName:e020145b-4005-44b0-89a6-293ea42f44f6 nodeName:}" failed. No retries permitted until 2025-11-29 06:49:49.06920656 +0000 UTC m=+940.113588641 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e020145b-4005-44b0-89a6-293ea42f44f6-memberlist") pod "speaker-6lb2j" (UID: "e020145b-4005-44b0-89a6-293ea42f44f6") : secret "metallb-memberlist" not found Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.570085 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e020145b-4005-44b0-89a6-293ea42f44f6-metallb-excludel2\") pod \"speaker-6lb2j\" (UID: \"e020145b-4005-44b0-89a6-293ea42f44f6\") " pod="metallb-system/speaker-6lb2j" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.572445 4947 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.574918 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e020145b-4005-44b0-89a6-293ea42f44f6-metrics-certs\") pod \"speaker-6lb2j\" (UID: \"e020145b-4005-44b0-89a6-293ea42f44f6\") " pod="metallb-system/speaker-6lb2j" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.580047 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8215fd44-dc8d-4791-992b-42a573cdfbed-metrics-certs\") pod \"controller-f8648f98b-dmwxs\" (UID: \"8215fd44-dc8d-4791-992b-42a573cdfbed\") " pod="metallb-system/controller-f8648f98b-dmwxs" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.582288 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8215fd44-dc8d-4791-992b-42a573cdfbed-cert\") pod \"controller-f8648f98b-dmwxs\" (UID: \"8215fd44-dc8d-4791-992b-42a573cdfbed\") " pod="metallb-system/controller-f8648f98b-dmwxs" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.595839 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-tpznx\" (UniqueName: \"kubernetes.io/projected/e020145b-4005-44b0-89a6-293ea42f44f6-kube-api-access-tpznx\") pod \"speaker-6lb2j\" (UID: \"e020145b-4005-44b0-89a6-293ea42f44f6\") " pod="metallb-system/speaker-6lb2j" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.596180 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hsx2\" (UniqueName: \"kubernetes.io/projected/8215fd44-dc8d-4791-992b-42a573cdfbed-kube-api-access-2hsx2\") pod \"controller-f8648f98b-dmwxs\" (UID: \"8215fd44-dc8d-4791-992b-42a573cdfbed\") " pod="metallb-system/controller-f8648f98b-dmwxs" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.629329 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-dmwxs" Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.786457 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-kkkmt"] Nov 29 06:49:48 crc kubenswrapper[4947]: W1129 06:49:48.800409 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod207ad8ab_e9b8_437c_a649_24d58d587eb9.slice/crio-ce2b72b09ff700216825456d49d87c43f1b6897a5e5965a5a9c5e1b6810a88ac WatchSource:0}: Error finding container ce2b72b09ff700216825456d49d87c43f1b6897a5e5965a5a9c5e1b6810a88ac: Status 404 returned error can't find the container with id ce2b72b09ff700216825456d49d87c43f1b6897a5e5965a5a9c5e1b6810a88ac Nov 29 06:49:48 crc kubenswrapper[4947]: I1129 06:49:48.877490 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-dmwxs"] Nov 29 06:49:48 crc kubenswrapper[4947]: W1129 06:49:48.882055 4947 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8215fd44_dc8d_4791_992b_42a573cdfbed.slice/crio-5278c7a0cd8c87e9762b81c659e9bb35f9cc22d53e7bb699922c116efce64d47 WatchSource:0}: Error finding container 5278c7a0cd8c87e9762b81c659e9bb35f9cc22d53e7bb699922c116efce64d47: Status 404 returned error can't find the container with id 5278c7a0cd8c87e9762b81c659e9bb35f9cc22d53e7bb699922c116efce64d47 Nov 29 06:49:49 crc kubenswrapper[4947]: I1129 06:49:49.076332 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e020145b-4005-44b0-89a6-293ea42f44f6-memberlist\") pod \"speaker-6lb2j\" (UID: \"e020145b-4005-44b0-89a6-293ea42f44f6\") " pod="metallb-system/speaker-6lb2j" Nov 29 06:49:49 crc kubenswrapper[4947]: E1129 06:49:49.076535 4947 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 29 06:49:49 crc kubenswrapper[4947]: E1129 06:49:49.076622 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e020145b-4005-44b0-89a6-293ea42f44f6-memberlist podName:e020145b-4005-44b0-89a6-293ea42f44f6 nodeName:}" failed. No retries permitted until 2025-11-29 06:49:50.076594566 +0000 UTC m=+941.120976637 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e020145b-4005-44b0-89a6-293ea42f44f6-memberlist") pod "speaker-6lb2j" (UID: "e020145b-4005-44b0-89a6-293ea42f44f6") : secret "metallb-memberlist" not found Nov 29 06:49:49 crc kubenswrapper[4947]: I1129 06:49:49.729492 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kkkmt" event={"ID":"207ad8ab-e9b8-437c-a649-24d58d587eb9","Type":"ContainerStarted","Data":"ce2b72b09ff700216825456d49d87c43f1b6897a5e5965a5a9c5e1b6810a88ac"} Nov 29 06:49:49 crc kubenswrapper[4947]: I1129 06:49:49.730962 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-dmwxs" event={"ID":"8215fd44-dc8d-4791-992b-42a573cdfbed","Type":"ContainerStarted","Data":"5278c7a0cd8c87e9762b81c659e9bb35f9cc22d53e7bb699922c116efce64d47"} Nov 29 06:49:50 crc kubenswrapper[4947]: I1129 06:49:50.116057 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e020145b-4005-44b0-89a6-293ea42f44f6-memberlist\") pod \"speaker-6lb2j\" (UID: \"e020145b-4005-44b0-89a6-293ea42f44f6\") " pod="metallb-system/speaker-6lb2j" Nov 29 06:49:50 crc kubenswrapper[4947]: E1129 06:49:50.116322 4947 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 29 06:49:50 crc kubenswrapper[4947]: E1129 06:49:50.116490 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e020145b-4005-44b0-89a6-293ea42f44f6-memberlist podName:e020145b-4005-44b0-89a6-293ea42f44f6 nodeName:}" failed. No retries permitted until 2025-11-29 06:49:52.116455085 +0000 UTC m=+943.160837166 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e020145b-4005-44b0-89a6-293ea42f44f6-memberlist") pod "speaker-6lb2j" (UID: "e020145b-4005-44b0-89a6-293ea42f44f6") : secret "metallb-memberlist" not found Nov 29 06:49:50 crc kubenswrapper[4947]: I1129 06:49:50.751885 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-dmwxs" event={"ID":"8215fd44-dc8d-4791-992b-42a573cdfbed","Type":"ContainerStarted","Data":"ccb516431e2a59dcf65c4ce684da30adcd5520cdc06e4fadfcf74b4eb2589e48"} Nov 29 06:49:50 crc kubenswrapper[4947]: I1129 06:49:50.763916 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j6qpj" event={"ID":"b277e4bf-960a-45ad-a864-c6bfb22ade67","Type":"ContainerStarted","Data":"0d43251586a0b80fe1b79a9ea99247eab4f74f497d5227f64ce96b023955bd27"} Nov 29 06:49:51 crc kubenswrapper[4947]: I1129 06:49:51.776061 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-dmwxs" event={"ID":"8215fd44-dc8d-4791-992b-42a573cdfbed","Type":"ContainerStarted","Data":"8f29f7ecb2473ecc49947399717f38b6ef1858ecfe11988422f392c921aa24a8"} Nov 29 06:49:51 crc kubenswrapper[4947]: I1129 06:49:51.776613 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-dmwxs" Nov 29 06:49:51 crc kubenswrapper[4947]: I1129 06:49:51.801832 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-dmwxs" podStartSLOduration=3.801805226 podStartE2EDuration="3.801805226s" podCreationTimestamp="2025-11-29 06:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:49:51.797356883 +0000 UTC m=+942.841738964" watchObservedRunningTime="2025-11-29 06:49:51.801805226 +0000 UTC m=+942.846187307" Nov 29 06:49:52 crc kubenswrapper[4947]: I1129 
06:49:52.162620 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e020145b-4005-44b0-89a6-293ea42f44f6-memberlist\") pod \"speaker-6lb2j\" (UID: \"e020145b-4005-44b0-89a6-293ea42f44f6\") " pod="metallb-system/speaker-6lb2j" Nov 29 06:49:52 crc kubenswrapper[4947]: I1129 06:49:52.171964 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e020145b-4005-44b0-89a6-293ea42f44f6-memberlist\") pod \"speaker-6lb2j\" (UID: \"e020145b-4005-44b0-89a6-293ea42f44f6\") " pod="metallb-system/speaker-6lb2j" Nov 29 06:49:52 crc kubenswrapper[4947]: I1129 06:49:52.208987 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-6lb2j" Nov 29 06:49:52 crc kubenswrapper[4947]: I1129 06:49:52.811887 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6lb2j" event={"ID":"e020145b-4005-44b0-89a6-293ea42f44f6","Type":"ContainerStarted","Data":"3b23df837652ff39d8bc1c303ae7e3b183ce238de0fc8788ff2c71107b2c15a0"} Nov 29 06:49:53 crc kubenswrapper[4947]: I1129 06:49:53.841689 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6lb2j" event={"ID":"e020145b-4005-44b0-89a6-293ea42f44f6","Type":"ContainerStarted","Data":"e79037aa55ba19ed8a3ef0e8e027d9e7fe71fd9182eabd415d3065436bea659e"} Nov 29 06:49:54 crc kubenswrapper[4947]: I1129 06:49:54.850246 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6lb2j" event={"ID":"e020145b-4005-44b0-89a6-293ea42f44f6","Type":"ContainerStarted","Data":"7326bc20158ba1cec5255ef982c0a3e1d48cf2b3aa680fcd4734fe4a2df3fc8c"} Nov 29 06:49:54 crc kubenswrapper[4947]: I1129 06:49:54.850417 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-6lb2j" Nov 29 06:49:54 crc kubenswrapper[4947]: I1129 06:49:54.870542 4947 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-6lb2j" podStartSLOduration=6.87052535 podStartE2EDuration="6.87052535s" podCreationTimestamp="2025-11-29 06:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:49:54.868999121 +0000 UTC m=+945.913381212" watchObservedRunningTime="2025-11-29 06:49:54.87052535 +0000 UTC m=+945.914907431" Nov 29 06:49:58 crc kubenswrapper[4947]: I1129 06:49:58.884296 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kkkmt" event={"ID":"207ad8ab-e9b8-437c-a649-24d58d587eb9","Type":"ContainerStarted","Data":"0ecb8fd9c8502a7a9d16d5e2398a80d0a0b424045ee0a6021ed90308f7afcfaf"} Nov 29 06:49:58 crc kubenswrapper[4947]: I1129 06:49:58.885080 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kkkmt" Nov 29 06:49:58 crc kubenswrapper[4947]: I1129 06:49:58.889476 4947 generic.go:334] "Generic (PLEG): container finished" podID="b277e4bf-960a-45ad-a864-c6bfb22ade67" containerID="db37f5f440b4d61ef9b921dfe3f46cd54aba2ab3bf89f304d69b09fd989065eb" exitCode=0 Nov 29 06:49:58 crc kubenswrapper[4947]: I1129 06:49:58.889548 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j6qpj" event={"ID":"b277e4bf-960a-45ad-a864-c6bfb22ade67","Type":"ContainerDied","Data":"db37f5f440b4d61ef9b921dfe3f46cd54aba2ab3bf89f304d69b09fd989065eb"} Nov 29 06:49:58 crc kubenswrapper[4947]: I1129 06:49:58.907351 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kkkmt" podStartSLOduration=1.525348749 podStartE2EDuration="10.907324728s" podCreationTimestamp="2025-11-29 06:49:48 +0000 UTC" firstStartedPulling="2025-11-29 06:49:48.810853271 +0000 UTC m=+939.855235352" lastFinishedPulling="2025-11-29 
06:49:58.19282925 +0000 UTC m=+949.237211331" observedRunningTime="2025-11-29 06:49:58.903776567 +0000 UTC m=+949.948158648" watchObservedRunningTime="2025-11-29 06:49:58.907324728 +0000 UTC m=+949.951706819" Nov 29 06:49:59 crc kubenswrapper[4947]: I1129 06:49:59.897507 4947 generic.go:334] "Generic (PLEG): container finished" podID="b277e4bf-960a-45ad-a864-c6bfb22ade67" containerID="69b20267c30309de1fb66381a638a0bd837f621bcb635988c686be3e2d97e1c3" exitCode=0 Nov 29 06:49:59 crc kubenswrapper[4947]: I1129 06:49:59.897597 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j6qpj" event={"ID":"b277e4bf-960a-45ad-a864-c6bfb22ade67","Type":"ContainerDied","Data":"69b20267c30309de1fb66381a638a0bd837f621bcb635988c686be3e2d97e1c3"} Nov 29 06:50:00 crc kubenswrapper[4947]: I1129 06:50:00.907156 4947 generic.go:334] "Generic (PLEG): container finished" podID="b277e4bf-960a-45ad-a864-c6bfb22ade67" containerID="73817b0a1743dad15a686ab4ca4c6822e292830e589233bfe719a96372ac2662" exitCode=0 Nov 29 06:50:00 crc kubenswrapper[4947]: I1129 06:50:00.907286 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j6qpj" event={"ID":"b277e4bf-960a-45ad-a864-c6bfb22ade67","Type":"ContainerDied","Data":"73817b0a1743dad15a686ab4ca4c6822e292830e589233bfe719a96372ac2662"} Nov 29 06:50:01 crc kubenswrapper[4947]: I1129 06:50:01.930536 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j6qpj" event={"ID":"b277e4bf-960a-45ad-a864-c6bfb22ade67","Type":"ContainerStarted","Data":"fdc469ad44a201dbb4521e0cf61cd413c3dd39c879e09e5dd25f7c70528f6b08"} Nov 29 06:50:01 crc kubenswrapper[4947]: I1129 06:50:01.931840 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j6qpj" event={"ID":"b277e4bf-960a-45ad-a864-c6bfb22ade67","Type":"ContainerStarted","Data":"bb6f41e33a64b67fcaeefeee5eadc37acd7704a3b24986a2bf674de76996f595"} Nov 29 06:50:01 crc kubenswrapper[4947]: I1129 06:50:01.931985 
4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j6qpj" event={"ID":"b277e4bf-960a-45ad-a864-c6bfb22ade67","Type":"ContainerStarted","Data":"85a1f0176a4ba486cc0b9ff222c883c9f5a903689551e632d0b46de87accc70c"} Nov 29 06:50:01 crc kubenswrapper[4947]: I1129 06:50:01.932103 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j6qpj" event={"ID":"b277e4bf-960a-45ad-a864-c6bfb22ade67","Type":"ContainerStarted","Data":"9654d383f6a57aabe73681b0a82061b79d54d27beaa4f612d7ede72e3584f7a8"} Nov 29 06:50:02 crc kubenswrapper[4947]: I1129 06:50:02.941892 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j6qpj" event={"ID":"b277e4bf-960a-45ad-a864-c6bfb22ade67","Type":"ContainerStarted","Data":"2675b7260970a9818ce5fff77d67dc40a51a9bdec6036cdd3abbde5ba7578a3b"} Nov 29 06:50:02 crc kubenswrapper[4947]: I1129 06:50:02.943305 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j6qpj" event={"ID":"b277e4bf-960a-45ad-a864-c6bfb22ade67","Type":"ContainerStarted","Data":"fbbf284907953a7d4261f9a578b0cc2c66efc31f88ce78a95a55a5425ee3c308"} Nov 29 06:50:02 crc kubenswrapper[4947]: I1129 06:50:02.943337 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:50:03 crc kubenswrapper[4947]: I1129 06:50:03.505073 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:50:03 crc kubenswrapper[4947]: I1129 06:50:03.546413 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:50:03 crc kubenswrapper[4947]: I1129 06:50:03.571066 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-j6qpj" podStartSLOduration=7.984335061 podStartE2EDuration="15.571046642s" podCreationTimestamp="2025-11-29 06:49:48 +0000 UTC" firstStartedPulling="2025-11-29 
06:49:50.578461045 +0000 UTC m=+941.622843126" lastFinishedPulling="2025-11-29 06:49:58.165172626 +0000 UTC m=+949.209554707" observedRunningTime="2025-11-29 06:50:02.969254014 +0000 UTC m=+954.013636105" watchObservedRunningTime="2025-11-29 06:50:03.571046642 +0000 UTC m=+954.615428723" Nov 29 06:50:08 crc kubenswrapper[4947]: I1129 06:50:08.496469 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kkkmt" Nov 29 06:50:08 crc kubenswrapper[4947]: I1129 06:50:08.658194 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-dmwxs" Nov 29 06:50:12 crc kubenswrapper[4947]: I1129 06:50:12.213605 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-6lb2j" Nov 29 06:50:18 crc kubenswrapper[4947]: I1129 06:50:18.512506 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-j6qpj" Nov 29 06:50:18 crc kubenswrapper[4947]: I1129 06:50:18.869881 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-2zlv9"] Nov 29 06:50:18 crc kubenswrapper[4947]: I1129 06:50:18.871007 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2zlv9" Nov 29 06:50:18 crc kubenswrapper[4947]: I1129 06:50:18.875542 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 29 06:50:18 crc kubenswrapper[4947]: I1129 06:50:18.875810 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 29 06:50:18 crc kubenswrapper[4947]: I1129 06:50:18.876452 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-w4k5x" Nov 29 06:50:18 crc kubenswrapper[4947]: I1129 06:50:18.885786 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2zlv9"] Nov 29 06:50:18 crc kubenswrapper[4947]: I1129 06:50:18.912158 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vsg2\" (UniqueName: \"kubernetes.io/projected/f0e10e14-0e29-485c-849e-81a02ca3cae0-kube-api-access-5vsg2\") pod \"openstack-operator-index-2zlv9\" (UID: \"f0e10e14-0e29-485c-849e-81a02ca3cae0\") " pod="openstack-operators/openstack-operator-index-2zlv9" Nov 29 06:50:19 crc kubenswrapper[4947]: I1129 06:50:19.013307 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vsg2\" (UniqueName: \"kubernetes.io/projected/f0e10e14-0e29-485c-849e-81a02ca3cae0-kube-api-access-5vsg2\") pod \"openstack-operator-index-2zlv9\" (UID: \"f0e10e14-0e29-485c-849e-81a02ca3cae0\") " pod="openstack-operators/openstack-operator-index-2zlv9" Nov 29 06:50:19 crc kubenswrapper[4947]: I1129 06:50:19.048348 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vsg2\" (UniqueName: \"kubernetes.io/projected/f0e10e14-0e29-485c-849e-81a02ca3cae0-kube-api-access-5vsg2\") pod \"openstack-operator-index-2zlv9\" (UID: 
\"f0e10e14-0e29-485c-849e-81a02ca3cae0\") " pod="openstack-operators/openstack-operator-index-2zlv9" Nov 29 06:50:19 crc kubenswrapper[4947]: I1129 06:50:19.199367 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2zlv9" Nov 29 06:50:19 crc kubenswrapper[4947]: I1129 06:50:19.657757 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2zlv9"] Nov 29 06:50:20 crc kubenswrapper[4947]: I1129 06:50:20.067741 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2zlv9" event={"ID":"f0e10e14-0e29-485c-849e-81a02ca3cae0","Type":"ContainerStarted","Data":"13d675d0fb3fd7996d85bce28bd7008d39a21923885b8ed26a8b50ca89eff535"} Nov 29 06:50:24 crc kubenswrapper[4947]: I1129 06:50:24.056212 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-2zlv9"] Nov 29 06:50:24 crc kubenswrapper[4947]: I1129 06:50:24.667841 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-bnzxg"] Nov 29 06:50:24 crc kubenswrapper[4947]: I1129 06:50:24.672477 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bnzxg" Nov 29 06:50:24 crc kubenswrapper[4947]: I1129 06:50:24.682560 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bnzxg"] Nov 29 06:50:24 crc kubenswrapper[4947]: I1129 06:50:24.824277 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4k5n\" (UniqueName: \"kubernetes.io/projected/0074ce39-b36c-4694-869f-965e7109a7ff-kube-api-access-v4k5n\") pod \"openstack-operator-index-bnzxg\" (UID: \"0074ce39-b36c-4694-869f-965e7109a7ff\") " pod="openstack-operators/openstack-operator-index-bnzxg" Nov 29 06:50:24 crc kubenswrapper[4947]: I1129 06:50:24.926092 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4k5n\" (UniqueName: \"kubernetes.io/projected/0074ce39-b36c-4694-869f-965e7109a7ff-kube-api-access-v4k5n\") pod \"openstack-operator-index-bnzxg\" (UID: \"0074ce39-b36c-4694-869f-965e7109a7ff\") " pod="openstack-operators/openstack-operator-index-bnzxg" Nov 29 06:50:24 crc kubenswrapper[4947]: I1129 06:50:24.950620 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4k5n\" (UniqueName: \"kubernetes.io/projected/0074ce39-b36c-4694-869f-965e7109a7ff-kube-api-access-v4k5n\") pod \"openstack-operator-index-bnzxg\" (UID: \"0074ce39-b36c-4694-869f-965e7109a7ff\") " pod="openstack-operators/openstack-operator-index-bnzxg" Nov 29 06:50:25 crc kubenswrapper[4947]: I1129 06:50:25.011258 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bnzxg" Nov 29 06:50:25 crc kubenswrapper[4947]: I1129 06:50:25.517072 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bnzxg"] Nov 29 06:50:26 crc kubenswrapper[4947]: I1129 06:50:26.116731 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2zlv9" event={"ID":"f0e10e14-0e29-485c-849e-81a02ca3cae0","Type":"ContainerStarted","Data":"ba22fa725195119248f05c7c3fc7687a838a91919c241fb906344f3a086a172b"} Nov 29 06:50:26 crc kubenswrapper[4947]: I1129 06:50:26.116878 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-2zlv9" podUID="f0e10e14-0e29-485c-849e-81a02ca3cae0" containerName="registry-server" containerID="cri-o://ba22fa725195119248f05c7c3fc7687a838a91919c241fb906344f3a086a172b" gracePeriod=2 Nov 29 06:50:26 crc kubenswrapper[4947]: I1129 06:50:26.121113 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bnzxg" event={"ID":"0074ce39-b36c-4694-869f-965e7109a7ff","Type":"ContainerStarted","Data":"cf03e1c04ce7c6363d7b9fe250bd6d6e6c680e2f168278076e086f22efa364d4"} Nov 29 06:50:26 crc kubenswrapper[4947]: I1129 06:50:26.121190 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bnzxg" event={"ID":"0074ce39-b36c-4694-869f-965e7109a7ff","Type":"ContainerStarted","Data":"21e1022368950a109c70ab669e97964a59f5b0fd974d649596a75247fcf7fac0"} Nov 29 06:50:26 crc kubenswrapper[4947]: I1129 06:50:26.144484 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-2zlv9" podStartSLOduration=2.94741892 podStartE2EDuration="8.144450891s" podCreationTimestamp="2025-11-29 06:50:18 +0000 UTC" firstStartedPulling="2025-11-29 06:50:19.690048433 +0000 UTC 
m=+970.734430514" lastFinishedPulling="2025-11-29 06:50:24.887080404 +0000 UTC m=+975.931462485" observedRunningTime="2025-11-29 06:50:26.13929888 +0000 UTC m=+977.183680961" watchObservedRunningTime="2025-11-29 06:50:26.144450891 +0000 UTC m=+977.188832982" Nov 29 06:50:26 crc kubenswrapper[4947]: I1129 06:50:26.160746 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-bnzxg" podStartSLOduration=2.077666727 podStartE2EDuration="2.160716543s" podCreationTimestamp="2025-11-29 06:50:24 +0000 UTC" firstStartedPulling="2025-11-29 06:50:25.533357443 +0000 UTC m=+976.577739534" lastFinishedPulling="2025-11-29 06:50:25.616407249 +0000 UTC m=+976.660789350" observedRunningTime="2025-11-29 06:50:26.159355029 +0000 UTC m=+977.203737120" watchObservedRunningTime="2025-11-29 06:50:26.160716543 +0000 UTC m=+977.205098624" Nov 29 06:50:26 crc kubenswrapper[4947]: I1129 06:50:26.508873 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2zlv9" Nov 29 06:50:26 crc kubenswrapper[4947]: I1129 06:50:26.652814 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vsg2\" (UniqueName: \"kubernetes.io/projected/f0e10e14-0e29-485c-849e-81a02ca3cae0-kube-api-access-5vsg2\") pod \"f0e10e14-0e29-485c-849e-81a02ca3cae0\" (UID: \"f0e10e14-0e29-485c-849e-81a02ca3cae0\") " Nov 29 06:50:26 crc kubenswrapper[4947]: I1129 06:50:26.660618 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0e10e14-0e29-485c-849e-81a02ca3cae0-kube-api-access-5vsg2" (OuterVolumeSpecName: "kube-api-access-5vsg2") pod "f0e10e14-0e29-485c-849e-81a02ca3cae0" (UID: "f0e10e14-0e29-485c-849e-81a02ca3cae0"). InnerVolumeSpecName "kube-api-access-5vsg2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:50:26 crc kubenswrapper[4947]: I1129 06:50:26.754770 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vsg2\" (UniqueName: \"kubernetes.io/projected/f0e10e14-0e29-485c-849e-81a02ca3cae0-kube-api-access-5vsg2\") on node \"crc\" DevicePath \"\"" Nov 29 06:50:27 crc kubenswrapper[4947]: I1129 06:50:27.131012 4947 generic.go:334] "Generic (PLEG): container finished" podID="f0e10e14-0e29-485c-849e-81a02ca3cae0" containerID="ba22fa725195119248f05c7c3fc7687a838a91919c241fb906344f3a086a172b" exitCode=0 Nov 29 06:50:27 crc kubenswrapper[4947]: I1129 06:50:27.132050 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2zlv9" Nov 29 06:50:27 crc kubenswrapper[4947]: I1129 06:50:27.137383 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2zlv9" event={"ID":"f0e10e14-0e29-485c-849e-81a02ca3cae0","Type":"ContainerDied","Data":"ba22fa725195119248f05c7c3fc7687a838a91919c241fb906344f3a086a172b"} Nov 29 06:50:27 crc kubenswrapper[4947]: I1129 06:50:27.137432 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2zlv9" event={"ID":"f0e10e14-0e29-485c-849e-81a02ca3cae0","Type":"ContainerDied","Data":"13d675d0fb3fd7996d85bce28bd7008d39a21923885b8ed26a8b50ca89eff535"} Nov 29 06:50:27 crc kubenswrapper[4947]: I1129 06:50:27.137458 4947 scope.go:117] "RemoveContainer" containerID="ba22fa725195119248f05c7c3fc7687a838a91919c241fb906344f3a086a172b" Nov 29 06:50:27 crc kubenswrapper[4947]: I1129 06:50:27.174970 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-2zlv9"] Nov 29 06:50:27 crc kubenswrapper[4947]: I1129 06:50:27.175943 4947 scope.go:117] "RemoveContainer" containerID="ba22fa725195119248f05c7c3fc7687a838a91919c241fb906344f3a086a172b" Nov 29 06:50:27 crc 
kubenswrapper[4947]: E1129 06:50:27.176571 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba22fa725195119248f05c7c3fc7687a838a91919c241fb906344f3a086a172b\": container with ID starting with ba22fa725195119248f05c7c3fc7687a838a91919c241fb906344f3a086a172b not found: ID does not exist" containerID="ba22fa725195119248f05c7c3fc7687a838a91919c241fb906344f3a086a172b" Nov 29 06:50:27 crc kubenswrapper[4947]: I1129 06:50:27.176619 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba22fa725195119248f05c7c3fc7687a838a91919c241fb906344f3a086a172b"} err="failed to get container status \"ba22fa725195119248f05c7c3fc7687a838a91919c241fb906344f3a086a172b\": rpc error: code = NotFound desc = could not find container \"ba22fa725195119248f05c7c3fc7687a838a91919c241fb906344f3a086a172b\": container with ID starting with ba22fa725195119248f05c7c3fc7687a838a91919c241fb906344f3a086a172b not found: ID does not exist" Nov 29 06:50:27 crc kubenswrapper[4947]: I1129 06:50:27.190687 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-2zlv9"] Nov 29 06:50:29 crc kubenswrapper[4947]: I1129 06:50:29.188285 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0e10e14-0e29-485c-849e-81a02ca3cae0" path="/var/lib/kubelet/pods/f0e10e14-0e29-485c-849e-81a02ca3cae0/volumes" Nov 29 06:50:35 crc kubenswrapper[4947]: I1129 06:50:35.011500 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-bnzxg" Nov 29 06:50:35 crc kubenswrapper[4947]: I1129 06:50:35.012409 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-bnzxg" Nov 29 06:50:35 crc kubenswrapper[4947]: I1129 06:50:35.044152 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack-operators/openstack-operator-index-bnzxg" Nov 29 06:50:35 crc kubenswrapper[4947]: I1129 06:50:35.230392 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-bnzxg" Nov 29 06:50:42 crc kubenswrapper[4947]: I1129 06:50:42.547569 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc"] Nov 29 06:50:42 crc kubenswrapper[4947]: E1129 06:50:42.548729 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e10e14-0e29-485c-849e-81a02ca3cae0" containerName="registry-server" Nov 29 06:50:42 crc kubenswrapper[4947]: I1129 06:50:42.548749 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e10e14-0e29-485c-849e-81a02ca3cae0" containerName="registry-server" Nov 29 06:50:42 crc kubenswrapper[4947]: I1129 06:50:42.548906 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0e10e14-0e29-485c-849e-81a02ca3cae0" containerName="registry-server" Nov 29 06:50:42 crc kubenswrapper[4947]: I1129 06:50:42.550156 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc" Nov 29 06:50:42 crc kubenswrapper[4947]: I1129 06:50:42.555118 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rfwnv" Nov 29 06:50:42 crc kubenswrapper[4947]: I1129 06:50:42.560961 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc"] Nov 29 06:50:42 crc kubenswrapper[4947]: I1129 06:50:42.691059 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b36e655e-d836-4174-9e94-de2532d08dc4-bundle\") pod \"2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc\" (UID: \"b36e655e-d836-4174-9e94-de2532d08dc4\") " pod="openstack-operators/2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc" Nov 29 06:50:42 crc kubenswrapper[4947]: I1129 06:50:42.691652 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b36e655e-d836-4174-9e94-de2532d08dc4-util\") pod \"2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc\" (UID: \"b36e655e-d836-4174-9e94-de2532d08dc4\") " pod="openstack-operators/2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc" Nov 29 06:50:42 crc kubenswrapper[4947]: I1129 06:50:42.691731 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf2g5\" (UniqueName: \"kubernetes.io/projected/b36e655e-d836-4174-9e94-de2532d08dc4-kube-api-access-kf2g5\") pod \"2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc\" (UID: \"b36e655e-d836-4174-9e94-de2532d08dc4\") " pod="openstack-operators/2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc" Nov 29 06:50:42 crc kubenswrapper[4947]: I1129 
06:50:42.793322 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b36e655e-d836-4174-9e94-de2532d08dc4-bundle\") pod \"2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc\" (UID: \"b36e655e-d836-4174-9e94-de2532d08dc4\") " pod="openstack-operators/2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc" Nov 29 06:50:42 crc kubenswrapper[4947]: I1129 06:50:42.793658 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b36e655e-d836-4174-9e94-de2532d08dc4-util\") pod \"2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc\" (UID: \"b36e655e-d836-4174-9e94-de2532d08dc4\") " pod="openstack-operators/2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc" Nov 29 06:50:42 crc kubenswrapper[4947]: I1129 06:50:42.793776 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf2g5\" (UniqueName: \"kubernetes.io/projected/b36e655e-d836-4174-9e94-de2532d08dc4-kube-api-access-kf2g5\") pod \"2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc\" (UID: \"b36e655e-d836-4174-9e94-de2532d08dc4\") " pod="openstack-operators/2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc" Nov 29 06:50:42 crc kubenswrapper[4947]: I1129 06:50:42.794104 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b36e655e-d836-4174-9e94-de2532d08dc4-bundle\") pod \"2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc\" (UID: \"b36e655e-d836-4174-9e94-de2532d08dc4\") " pod="openstack-operators/2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc" Nov 29 06:50:42 crc kubenswrapper[4947]: I1129 06:50:42.794550 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/b36e655e-d836-4174-9e94-de2532d08dc4-util\") pod \"2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc\" (UID: \"b36e655e-d836-4174-9e94-de2532d08dc4\") " pod="openstack-operators/2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc" Nov 29 06:50:42 crc kubenswrapper[4947]: I1129 06:50:42.817763 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf2g5\" (UniqueName: \"kubernetes.io/projected/b36e655e-d836-4174-9e94-de2532d08dc4-kube-api-access-kf2g5\") pod \"2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc\" (UID: \"b36e655e-d836-4174-9e94-de2532d08dc4\") " pod="openstack-operators/2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc" Nov 29 06:50:42 crc kubenswrapper[4947]: I1129 06:50:42.887310 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc" Nov 29 06:50:43 crc kubenswrapper[4947]: I1129 06:50:43.355491 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc"] Nov 29 06:50:44 crc kubenswrapper[4947]: I1129 06:50:44.258153 4947 generic.go:334] "Generic (PLEG): container finished" podID="b36e655e-d836-4174-9e94-de2532d08dc4" containerID="b4b346d389d9c8a2a2aade28d54b4b14d38af890d973193bd0dfb3f8cd84b6db" exitCode=0 Nov 29 06:50:44 crc kubenswrapper[4947]: I1129 06:50:44.258269 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc" event={"ID":"b36e655e-d836-4174-9e94-de2532d08dc4","Type":"ContainerDied","Data":"b4b346d389d9c8a2a2aade28d54b4b14d38af890d973193bd0dfb3f8cd84b6db"} Nov 29 06:50:44 crc kubenswrapper[4947]: I1129 06:50:44.258634 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc" event={"ID":"b36e655e-d836-4174-9e94-de2532d08dc4","Type":"ContainerStarted","Data":"1a77ce00f78cc9645b493f37bae71b7c850bb5f5a69288c5e4648d37d0c06d82"} Nov 29 06:50:45 crc kubenswrapper[4947]: I1129 06:50:45.268712 4947 generic.go:334] "Generic (PLEG): container finished" podID="b36e655e-d836-4174-9e94-de2532d08dc4" containerID="869f70769c10216152ab5c8b6baa1181d02e71a16bb7655648c21060097323dd" exitCode=0 Nov 29 06:50:45 crc kubenswrapper[4947]: I1129 06:50:45.268798 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc" event={"ID":"b36e655e-d836-4174-9e94-de2532d08dc4","Type":"ContainerDied","Data":"869f70769c10216152ab5c8b6baa1181d02e71a16bb7655648c21060097323dd"} Nov 29 06:50:46 crc kubenswrapper[4947]: I1129 06:50:46.279510 4947 generic.go:334] "Generic (PLEG): container finished" podID="b36e655e-d836-4174-9e94-de2532d08dc4" containerID="601f427f898585e2a773f289e0b3eb2e86e0b4638129a034d6dd2dd200a9fb6c" exitCode=0 Nov 29 06:50:46 crc kubenswrapper[4947]: I1129 06:50:46.279639 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc" event={"ID":"b36e655e-d836-4174-9e94-de2532d08dc4","Type":"ContainerDied","Data":"601f427f898585e2a773f289e0b3eb2e86e0b4638129a034d6dd2dd200a9fb6c"} Nov 29 06:50:47 crc kubenswrapper[4947]: I1129 06:50:47.575028 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc" Nov 29 06:50:47 crc kubenswrapper[4947]: I1129 06:50:47.707530 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf2g5\" (UniqueName: \"kubernetes.io/projected/b36e655e-d836-4174-9e94-de2532d08dc4-kube-api-access-kf2g5\") pod \"b36e655e-d836-4174-9e94-de2532d08dc4\" (UID: \"b36e655e-d836-4174-9e94-de2532d08dc4\") " Nov 29 06:50:47 crc kubenswrapper[4947]: I1129 06:50:47.707995 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b36e655e-d836-4174-9e94-de2532d08dc4-bundle\") pod \"b36e655e-d836-4174-9e94-de2532d08dc4\" (UID: \"b36e655e-d836-4174-9e94-de2532d08dc4\") " Nov 29 06:50:47 crc kubenswrapper[4947]: I1129 06:50:47.708012 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b36e655e-d836-4174-9e94-de2532d08dc4-util\") pod \"b36e655e-d836-4174-9e94-de2532d08dc4\" (UID: \"b36e655e-d836-4174-9e94-de2532d08dc4\") " Nov 29 06:50:47 crc kubenswrapper[4947]: I1129 06:50:47.708875 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b36e655e-d836-4174-9e94-de2532d08dc4-bundle" (OuterVolumeSpecName: "bundle") pod "b36e655e-d836-4174-9e94-de2532d08dc4" (UID: "b36e655e-d836-4174-9e94-de2532d08dc4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:50:47 crc kubenswrapper[4947]: I1129 06:50:47.715023 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b36e655e-d836-4174-9e94-de2532d08dc4-kube-api-access-kf2g5" (OuterVolumeSpecName: "kube-api-access-kf2g5") pod "b36e655e-d836-4174-9e94-de2532d08dc4" (UID: "b36e655e-d836-4174-9e94-de2532d08dc4"). InnerVolumeSpecName "kube-api-access-kf2g5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:50:47 crc kubenswrapper[4947]: I1129 06:50:47.729368 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b36e655e-d836-4174-9e94-de2532d08dc4-util" (OuterVolumeSpecName: "util") pod "b36e655e-d836-4174-9e94-de2532d08dc4" (UID: "b36e655e-d836-4174-9e94-de2532d08dc4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:50:47 crc kubenswrapper[4947]: I1129 06:50:47.810444 4947 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b36e655e-d836-4174-9e94-de2532d08dc4-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:50:47 crc kubenswrapper[4947]: I1129 06:50:47.810496 4947 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b36e655e-d836-4174-9e94-de2532d08dc4-util\") on node \"crc\" DevicePath \"\"" Nov 29 06:50:47 crc kubenswrapper[4947]: I1129 06:50:47.810508 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf2g5\" (UniqueName: \"kubernetes.io/projected/b36e655e-d836-4174-9e94-de2532d08dc4-kube-api-access-kf2g5\") on node \"crc\" DevicePath \"\"" Nov 29 06:50:48 crc kubenswrapper[4947]: I1129 06:50:48.298061 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc" event={"ID":"b36e655e-d836-4174-9e94-de2532d08dc4","Type":"ContainerDied","Data":"1a77ce00f78cc9645b493f37bae71b7c850bb5f5a69288c5e4648d37d0c06d82"} Nov 29 06:50:48 crc kubenswrapper[4947]: I1129 06:50:48.298558 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a77ce00f78cc9645b493f37bae71b7c850bb5f5a69288c5e4648d37d0c06d82" Nov 29 06:50:48 crc kubenswrapper[4947]: I1129 06:50:48.298144 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc" Nov 29 06:50:52 crc kubenswrapper[4947]: I1129 06:50:52.083833 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fkj96"] Nov 29 06:50:52 crc kubenswrapper[4947]: E1129 06:50:52.084578 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36e655e-d836-4174-9e94-de2532d08dc4" containerName="extract" Nov 29 06:50:52 crc kubenswrapper[4947]: I1129 06:50:52.084592 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36e655e-d836-4174-9e94-de2532d08dc4" containerName="extract" Nov 29 06:50:52 crc kubenswrapper[4947]: E1129 06:50:52.084612 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36e655e-d836-4174-9e94-de2532d08dc4" containerName="util" Nov 29 06:50:52 crc kubenswrapper[4947]: I1129 06:50:52.084618 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36e655e-d836-4174-9e94-de2532d08dc4" containerName="util" Nov 29 06:50:52 crc kubenswrapper[4947]: E1129 06:50:52.084630 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36e655e-d836-4174-9e94-de2532d08dc4" containerName="pull" Nov 29 06:50:52 crc kubenswrapper[4947]: I1129 06:50:52.084636 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36e655e-d836-4174-9e94-de2532d08dc4" containerName="pull" Nov 29 06:50:52 crc kubenswrapper[4947]: I1129 06:50:52.084757 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="b36e655e-d836-4174-9e94-de2532d08dc4" containerName="extract" Nov 29 06:50:52 crc kubenswrapper[4947]: I1129 06:50:52.085702 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fkj96" Nov 29 06:50:52 crc kubenswrapper[4947]: I1129 06:50:52.106128 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fkj96"] Nov 29 06:50:52 crc kubenswrapper[4947]: I1129 06:50:52.168580 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d93469b0-7692-4570-84f3-0d5db0913296-utilities\") pod \"certified-operators-fkj96\" (UID: \"d93469b0-7692-4570-84f3-0d5db0913296\") " pod="openshift-marketplace/certified-operators-fkj96" Nov 29 06:50:52 crc kubenswrapper[4947]: I1129 06:50:52.168668 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5mqw\" (UniqueName: \"kubernetes.io/projected/d93469b0-7692-4570-84f3-0d5db0913296-kube-api-access-w5mqw\") pod \"certified-operators-fkj96\" (UID: \"d93469b0-7692-4570-84f3-0d5db0913296\") " pod="openshift-marketplace/certified-operators-fkj96" Nov 29 06:50:52 crc kubenswrapper[4947]: I1129 06:50:52.168718 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d93469b0-7692-4570-84f3-0d5db0913296-catalog-content\") pod \"certified-operators-fkj96\" (UID: \"d93469b0-7692-4570-84f3-0d5db0913296\") " pod="openshift-marketplace/certified-operators-fkj96" Nov 29 06:50:52 crc kubenswrapper[4947]: I1129 06:50:52.270065 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d93469b0-7692-4570-84f3-0d5db0913296-catalog-content\") pod \"certified-operators-fkj96\" (UID: \"d93469b0-7692-4570-84f3-0d5db0913296\") " pod="openshift-marketplace/certified-operators-fkj96" Nov 29 06:50:52 crc kubenswrapper[4947]: I1129 06:50:52.270256 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d93469b0-7692-4570-84f3-0d5db0913296-utilities\") pod \"certified-operators-fkj96\" (UID: \"d93469b0-7692-4570-84f3-0d5db0913296\") " pod="openshift-marketplace/certified-operators-fkj96" Nov 29 06:50:52 crc kubenswrapper[4947]: I1129 06:50:52.270309 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5mqw\" (UniqueName: \"kubernetes.io/projected/d93469b0-7692-4570-84f3-0d5db0913296-kube-api-access-w5mqw\") pod \"certified-operators-fkj96\" (UID: \"d93469b0-7692-4570-84f3-0d5db0913296\") " pod="openshift-marketplace/certified-operators-fkj96" Nov 29 06:50:52 crc kubenswrapper[4947]: I1129 06:50:52.271031 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d93469b0-7692-4570-84f3-0d5db0913296-utilities\") pod \"certified-operators-fkj96\" (UID: \"d93469b0-7692-4570-84f3-0d5db0913296\") " pod="openshift-marketplace/certified-operators-fkj96" Nov 29 06:50:52 crc kubenswrapper[4947]: I1129 06:50:52.271346 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d93469b0-7692-4570-84f3-0d5db0913296-catalog-content\") pod \"certified-operators-fkj96\" (UID: \"d93469b0-7692-4570-84f3-0d5db0913296\") " pod="openshift-marketplace/certified-operators-fkj96" Nov 29 06:50:52 crc kubenswrapper[4947]: I1129 06:50:52.316298 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5mqw\" (UniqueName: \"kubernetes.io/projected/d93469b0-7692-4570-84f3-0d5db0913296-kube-api-access-w5mqw\") pod \"certified-operators-fkj96\" (UID: \"d93469b0-7692-4570-84f3-0d5db0913296\") " pod="openshift-marketplace/certified-operators-fkj96" Nov 29 06:50:52 crc kubenswrapper[4947]: I1129 06:50:52.408981 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fkj96" Nov 29 06:50:52 crc kubenswrapper[4947]: I1129 06:50:52.951017 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fkj96"] Nov 29 06:50:52 crc kubenswrapper[4947]: I1129 06:50:52.987468 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 06:50:52 crc kubenswrapper[4947]: I1129 06:50:52.987552 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 06:50:53 crc kubenswrapper[4947]: I1129 06:50:53.350549 4947 generic.go:334] "Generic (PLEG): container finished" podID="d93469b0-7692-4570-84f3-0d5db0913296" containerID="a022fe1733d0d25fdc8ea11c7f47ab538db52db4ec4c08dc30bbf139886cb077" exitCode=0 Nov 29 06:50:53 crc kubenswrapper[4947]: I1129 06:50:53.351482 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkj96" event={"ID":"d93469b0-7692-4570-84f3-0d5db0913296","Type":"ContainerDied","Data":"a022fe1733d0d25fdc8ea11c7f47ab538db52db4ec4c08dc30bbf139886cb077"} Nov 29 06:50:53 crc kubenswrapper[4947]: I1129 06:50:53.352199 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkj96" event={"ID":"d93469b0-7692-4570-84f3-0d5db0913296","Type":"ContainerStarted","Data":"554359623b7c0e5ec4bc2ead0f93e0bf9f467c42c3a2eb52456b42d468d5ffc8"} Nov 29 06:50:54 crc kubenswrapper[4947]: I1129 06:50:54.347759 4947 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack-operators/openstack-operator-controller-operator-5c6cf7c4d4-cpbpx"] Nov 29 06:50:54 crc kubenswrapper[4947]: I1129 06:50:54.349234 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5c6cf7c4d4-cpbpx" Nov 29 06:50:54 crc kubenswrapper[4947]: I1129 06:50:54.351917 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-96jnf" Nov 29 06:50:54 crc kubenswrapper[4947]: I1129 06:50:54.379178 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5c6cf7c4d4-cpbpx"] Nov 29 06:50:54 crc kubenswrapper[4947]: I1129 06:50:54.506957 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfxrh\" (UniqueName: \"kubernetes.io/projected/177cf389-1d51-4e5c-89f1-a0d377aae734-kube-api-access-gfxrh\") pod \"openstack-operator-controller-operator-5c6cf7c4d4-cpbpx\" (UID: \"177cf389-1d51-4e5c-89f1-a0d377aae734\") " pod="openstack-operators/openstack-operator-controller-operator-5c6cf7c4d4-cpbpx" Nov 29 06:50:54 crc kubenswrapper[4947]: I1129 06:50:54.608820 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfxrh\" (UniqueName: \"kubernetes.io/projected/177cf389-1d51-4e5c-89f1-a0d377aae734-kube-api-access-gfxrh\") pod \"openstack-operator-controller-operator-5c6cf7c4d4-cpbpx\" (UID: \"177cf389-1d51-4e5c-89f1-a0d377aae734\") " pod="openstack-operators/openstack-operator-controller-operator-5c6cf7c4d4-cpbpx" Nov 29 06:50:54 crc kubenswrapper[4947]: I1129 06:50:54.637711 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfxrh\" (UniqueName: \"kubernetes.io/projected/177cf389-1d51-4e5c-89f1-a0d377aae734-kube-api-access-gfxrh\") pod 
\"openstack-operator-controller-operator-5c6cf7c4d4-cpbpx\" (UID: \"177cf389-1d51-4e5c-89f1-a0d377aae734\") " pod="openstack-operators/openstack-operator-controller-operator-5c6cf7c4d4-cpbpx" Nov 29 06:50:54 crc kubenswrapper[4947]: I1129 06:50:54.674770 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5c6cf7c4d4-cpbpx" Nov 29 06:50:55 crc kubenswrapper[4947]: I1129 06:50:55.488753 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ngztm"] Nov 29 06:50:55 crc kubenswrapper[4947]: I1129 06:50:55.499891 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ngztm"] Nov 29 06:50:55 crc kubenswrapper[4947]: I1129 06:50:55.500098 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ngztm" Nov 29 06:50:55 crc kubenswrapper[4947]: I1129 06:50:55.623971 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf13e8d7-d570-455d-8498-85bf78a220df-catalog-content\") pod \"community-operators-ngztm\" (UID: \"bf13e8d7-d570-455d-8498-85bf78a220df\") " pod="openshift-marketplace/community-operators-ngztm" Nov 29 06:50:55 crc kubenswrapper[4947]: I1129 06:50:55.624324 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf13e8d7-d570-455d-8498-85bf78a220df-utilities\") pod \"community-operators-ngztm\" (UID: \"bf13e8d7-d570-455d-8498-85bf78a220df\") " pod="openshift-marketplace/community-operators-ngztm" Nov 29 06:50:55 crc kubenswrapper[4947]: I1129 06:50:55.624432 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55fn4\" (UniqueName: 
\"kubernetes.io/projected/bf13e8d7-d570-455d-8498-85bf78a220df-kube-api-access-55fn4\") pod \"community-operators-ngztm\" (UID: \"bf13e8d7-d570-455d-8498-85bf78a220df\") " pod="openshift-marketplace/community-operators-ngztm" Nov 29 06:50:55 crc kubenswrapper[4947]: I1129 06:50:55.725983 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55fn4\" (UniqueName: \"kubernetes.io/projected/bf13e8d7-d570-455d-8498-85bf78a220df-kube-api-access-55fn4\") pod \"community-operators-ngztm\" (UID: \"bf13e8d7-d570-455d-8498-85bf78a220df\") " pod="openshift-marketplace/community-operators-ngztm" Nov 29 06:50:55 crc kubenswrapper[4947]: I1129 06:50:55.726059 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf13e8d7-d570-455d-8498-85bf78a220df-catalog-content\") pod \"community-operators-ngztm\" (UID: \"bf13e8d7-d570-455d-8498-85bf78a220df\") " pod="openshift-marketplace/community-operators-ngztm" Nov 29 06:50:55 crc kubenswrapper[4947]: I1129 06:50:55.726139 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf13e8d7-d570-455d-8498-85bf78a220df-utilities\") pod \"community-operators-ngztm\" (UID: \"bf13e8d7-d570-455d-8498-85bf78a220df\") " pod="openshift-marketplace/community-operators-ngztm" Nov 29 06:50:55 crc kubenswrapper[4947]: I1129 06:50:55.726739 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf13e8d7-d570-455d-8498-85bf78a220df-utilities\") pod \"community-operators-ngztm\" (UID: \"bf13e8d7-d570-455d-8498-85bf78a220df\") " pod="openshift-marketplace/community-operators-ngztm" Nov 29 06:50:55 crc kubenswrapper[4947]: I1129 06:50:55.726826 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bf13e8d7-d570-455d-8498-85bf78a220df-catalog-content\") pod \"community-operators-ngztm\" (UID: \"bf13e8d7-d570-455d-8498-85bf78a220df\") " pod="openshift-marketplace/community-operators-ngztm" Nov 29 06:50:55 crc kubenswrapper[4947]: I1129 06:50:55.749119 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55fn4\" (UniqueName: \"kubernetes.io/projected/bf13e8d7-d570-455d-8498-85bf78a220df-kube-api-access-55fn4\") pod \"community-operators-ngztm\" (UID: \"bf13e8d7-d570-455d-8498-85bf78a220df\") " pod="openshift-marketplace/community-operators-ngztm" Nov 29 06:50:55 crc kubenswrapper[4947]: I1129 06:50:55.818277 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ngztm" Nov 29 06:50:57 crc kubenswrapper[4947]: I1129 06:50:57.225085 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ngztm"] Nov 29 06:50:57 crc kubenswrapper[4947]: I1129 06:50:57.249278 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5c6cf7c4d4-cpbpx"] Nov 29 06:50:57 crc kubenswrapper[4947]: W1129 06:50:57.257431 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod177cf389_1d51_4e5c_89f1_a0d377aae734.slice/crio-3c1fc1a39a7de7f8e589dd9a54f4a0545d71926ecd241b3b89f7966ae1fe4b62 WatchSource:0}: Error finding container 3c1fc1a39a7de7f8e589dd9a54f4a0545d71926ecd241b3b89f7966ae1fe4b62: Status 404 returned error can't find the container with id 3c1fc1a39a7de7f8e589dd9a54f4a0545d71926ecd241b3b89f7966ae1fe4b62 Nov 29 06:50:57 crc kubenswrapper[4947]: I1129 06:50:57.399611 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5c6cf7c4d4-cpbpx" 
event={"ID":"177cf389-1d51-4e5c-89f1-a0d377aae734","Type":"ContainerStarted","Data":"3c1fc1a39a7de7f8e589dd9a54f4a0545d71926ecd241b3b89f7966ae1fe4b62"} Nov 29 06:50:57 crc kubenswrapper[4947]: I1129 06:50:57.402084 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngztm" event={"ID":"bf13e8d7-d570-455d-8498-85bf78a220df","Type":"ContainerStarted","Data":"4285d734167223af90950c5a6c1af4ffd802f4353aeb292836079716918ce5bb"} Nov 29 06:50:57 crc kubenswrapper[4947]: I1129 06:50:57.404591 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkj96" event={"ID":"d93469b0-7692-4570-84f3-0d5db0913296","Type":"ContainerStarted","Data":"f1728f6c4468229f047eeb81f5eb61f26893691a993d90b89e6b870f0a714927"} Nov 29 06:50:58 crc kubenswrapper[4947]: I1129 06:50:58.415408 4947 generic.go:334] "Generic (PLEG): container finished" podID="bf13e8d7-d570-455d-8498-85bf78a220df" containerID="f5f89d7b24dec075381fb8d528f1be1cccd99269a0c179e27bf84ed1f822b78f" exitCode=0 Nov 29 06:50:58 crc kubenswrapper[4947]: I1129 06:50:58.415552 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngztm" event={"ID":"bf13e8d7-d570-455d-8498-85bf78a220df","Type":"ContainerDied","Data":"f5f89d7b24dec075381fb8d528f1be1cccd99269a0c179e27bf84ed1f822b78f"} Nov 29 06:50:58 crc kubenswrapper[4947]: I1129 06:50:58.419011 4947 generic.go:334] "Generic (PLEG): container finished" podID="d93469b0-7692-4570-84f3-0d5db0913296" containerID="f1728f6c4468229f047eeb81f5eb61f26893691a993d90b89e6b870f0a714927" exitCode=0 Nov 29 06:50:58 crc kubenswrapper[4947]: I1129 06:50:58.419068 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkj96" event={"ID":"d93469b0-7692-4570-84f3-0d5db0913296","Type":"ContainerDied","Data":"f1728f6c4468229f047eeb81f5eb61f26893691a993d90b89e6b870f0a714927"} Nov 29 06:51:03 crc kubenswrapper[4947]: 
I1129 06:51:03.491368 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngztm" event={"ID":"bf13e8d7-d570-455d-8498-85bf78a220df","Type":"ContainerStarted","Data":"58b5188b2570345475c390b2867507f21ab97f875eb40ebb24c106c86015e61d"} Nov 29 06:51:03 crc kubenswrapper[4947]: I1129 06:51:03.496162 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkj96" event={"ID":"d93469b0-7692-4570-84f3-0d5db0913296","Type":"ContainerStarted","Data":"8fb8eba34e2235d20b230f948ce551d9e01327fd9566a98514137579c799d5d7"} Nov 29 06:51:03 crc kubenswrapper[4947]: I1129 06:51:03.501741 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5c6cf7c4d4-cpbpx" event={"ID":"177cf389-1d51-4e5c-89f1-a0d377aae734","Type":"ContainerStarted","Data":"8a55f3e2ebc4037e780ecd97e071f843085472854a2a5d241473fc37eefcfc37"} Nov 29 06:51:03 crc kubenswrapper[4947]: I1129 06:51:03.502501 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-5c6cf7c4d4-cpbpx" Nov 29 06:51:03 crc kubenswrapper[4947]: I1129 06:51:03.558141 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-5c6cf7c4d4-cpbpx" podStartSLOduration=3.8496701509999998 podStartE2EDuration="9.558112271s" podCreationTimestamp="2025-11-29 06:50:54 +0000 UTC" firstStartedPulling="2025-11-29 06:50:57.259108514 +0000 UTC m=+1008.303490595" lastFinishedPulling="2025-11-29 06:51:02.967550634 +0000 UTC m=+1014.011932715" observedRunningTime="2025-11-29 06:51:03.554571101 +0000 UTC m=+1014.598953192" watchObservedRunningTime="2025-11-29 06:51:03.558112271 +0000 UTC m=+1014.602494352" Nov 29 06:51:03 crc kubenswrapper[4947]: I1129 06:51:03.581256 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-fkj96" podStartSLOduration=2.067121454 podStartE2EDuration="11.581155695s" podCreationTimestamp="2025-11-29 06:50:52 +0000 UTC" firstStartedPulling="2025-11-29 06:50:53.352251315 +0000 UTC m=+1004.396633396" lastFinishedPulling="2025-11-29 06:51:02.866285556 +0000 UTC m=+1013.910667637" observedRunningTime="2025-11-29 06:51:03.577484852 +0000 UTC m=+1014.621866933" watchObservedRunningTime="2025-11-29 06:51:03.581155695 +0000 UTC m=+1014.625537776" Nov 29 06:51:04 crc kubenswrapper[4947]: I1129 06:51:04.510740 4947 generic.go:334] "Generic (PLEG): container finished" podID="bf13e8d7-d570-455d-8498-85bf78a220df" containerID="58b5188b2570345475c390b2867507f21ab97f875eb40ebb24c106c86015e61d" exitCode=0 Nov 29 06:51:04 crc kubenswrapper[4947]: I1129 06:51:04.510837 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngztm" event={"ID":"bf13e8d7-d570-455d-8498-85bf78a220df","Type":"ContainerDied","Data":"58b5188b2570345475c390b2867507f21ab97f875eb40ebb24c106c86015e61d"} Nov 29 06:51:06 crc kubenswrapper[4947]: I1129 06:51:06.528849 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngztm" event={"ID":"bf13e8d7-d570-455d-8498-85bf78a220df","Type":"ContainerStarted","Data":"b0e250cc54e399d5baa9392d2d6db9d750ddd668c4c729d46413e85dd381c703"} Nov 29 06:51:06 crc kubenswrapper[4947]: I1129 06:51:06.562942 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ngztm" podStartSLOduration=4.605016318 podStartE2EDuration="11.562906954s" podCreationTimestamp="2025-11-29 06:50:55 +0000 UTC" firstStartedPulling="2025-11-29 06:50:58.417656656 +0000 UTC m=+1009.462038737" lastFinishedPulling="2025-11-29 06:51:05.375547282 +0000 UTC m=+1016.419929373" observedRunningTime="2025-11-29 06:51:06.554582593 +0000 UTC m=+1017.598964684" watchObservedRunningTime="2025-11-29 
06:51:06.562906954 +0000 UTC m=+1017.607289115" Nov 29 06:51:12 crc kubenswrapper[4947]: I1129 06:51:12.410904 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fkj96" Nov 29 06:51:12 crc kubenswrapper[4947]: I1129 06:51:12.411724 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fkj96" Nov 29 06:51:12 crc kubenswrapper[4947]: I1129 06:51:12.455416 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fkj96" Nov 29 06:51:12 crc kubenswrapper[4947]: I1129 06:51:12.612515 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fkj96" Nov 29 06:51:12 crc kubenswrapper[4947]: I1129 06:51:12.697334 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fkj96"] Nov 29 06:51:14 crc kubenswrapper[4947]: I1129 06:51:14.580874 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fkj96" podUID="d93469b0-7692-4570-84f3-0d5db0913296" containerName="registry-server" containerID="cri-o://8fb8eba34e2235d20b230f948ce551d9e01327fd9566a98514137579c799d5d7" gracePeriod=2 Nov 29 06:51:14 crc kubenswrapper[4947]: I1129 06:51:14.679788 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-5c6cf7c4d4-cpbpx" Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.532937 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fkj96" Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.592882 4947 generic.go:334] "Generic (PLEG): container finished" podID="d93469b0-7692-4570-84f3-0d5db0913296" containerID="8fb8eba34e2235d20b230f948ce551d9e01327fd9566a98514137579c799d5d7" exitCode=0 Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.592957 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fkj96" Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.592981 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkj96" event={"ID":"d93469b0-7692-4570-84f3-0d5db0913296","Type":"ContainerDied","Data":"8fb8eba34e2235d20b230f948ce551d9e01327fd9566a98514137579c799d5d7"} Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.593037 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkj96" event={"ID":"d93469b0-7692-4570-84f3-0d5db0913296","Type":"ContainerDied","Data":"554359623b7c0e5ec4bc2ead0f93e0bf9f467c42c3a2eb52456b42d468d5ffc8"} Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.593065 4947 scope.go:117] "RemoveContainer" containerID="8fb8eba34e2235d20b230f948ce551d9e01327fd9566a98514137579c799d5d7" Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.605512 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5mqw\" (UniqueName: \"kubernetes.io/projected/d93469b0-7692-4570-84f3-0d5db0913296-kube-api-access-w5mqw\") pod \"d93469b0-7692-4570-84f3-0d5db0913296\" (UID: \"d93469b0-7692-4570-84f3-0d5db0913296\") " Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.605598 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d93469b0-7692-4570-84f3-0d5db0913296-catalog-content\") pod 
\"d93469b0-7692-4570-84f3-0d5db0913296\" (UID: \"d93469b0-7692-4570-84f3-0d5db0913296\") " Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.605644 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d93469b0-7692-4570-84f3-0d5db0913296-utilities\") pod \"d93469b0-7692-4570-84f3-0d5db0913296\" (UID: \"d93469b0-7692-4570-84f3-0d5db0913296\") " Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.607012 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d93469b0-7692-4570-84f3-0d5db0913296-utilities" (OuterVolumeSpecName: "utilities") pod "d93469b0-7692-4570-84f3-0d5db0913296" (UID: "d93469b0-7692-4570-84f3-0d5db0913296"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.620322 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d93469b0-7692-4570-84f3-0d5db0913296-kube-api-access-w5mqw" (OuterVolumeSpecName: "kube-api-access-w5mqw") pod "d93469b0-7692-4570-84f3-0d5db0913296" (UID: "d93469b0-7692-4570-84f3-0d5db0913296"). InnerVolumeSpecName "kube-api-access-w5mqw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.628605 4947 scope.go:117] "RemoveContainer" containerID="f1728f6c4468229f047eeb81f5eb61f26893691a993d90b89e6b870f0a714927" Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.651328 4947 scope.go:117] "RemoveContainer" containerID="a022fe1733d0d25fdc8ea11c7f47ab538db52db4ec4c08dc30bbf139886cb077" Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.701961 4947 scope.go:117] "RemoveContainer" containerID="8fb8eba34e2235d20b230f948ce551d9e01327fd9566a98514137579c799d5d7" Nov 29 06:51:15 crc kubenswrapper[4947]: E1129 06:51:15.706508 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fb8eba34e2235d20b230f948ce551d9e01327fd9566a98514137579c799d5d7\": container with ID starting with 8fb8eba34e2235d20b230f948ce551d9e01327fd9566a98514137579c799d5d7 not found: ID does not exist" containerID="8fb8eba34e2235d20b230f948ce551d9e01327fd9566a98514137579c799d5d7" Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.706596 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fb8eba34e2235d20b230f948ce551d9e01327fd9566a98514137579c799d5d7"} err="failed to get container status \"8fb8eba34e2235d20b230f948ce551d9e01327fd9566a98514137579c799d5d7\": rpc error: code = NotFound desc = could not find container \"8fb8eba34e2235d20b230f948ce551d9e01327fd9566a98514137579c799d5d7\": container with ID starting with 8fb8eba34e2235d20b230f948ce551d9e01327fd9566a98514137579c799d5d7 not found: ID does not exist" Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.706653 4947 scope.go:117] "RemoveContainer" containerID="f1728f6c4468229f047eeb81f5eb61f26893691a993d90b89e6b870f0a714927" Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.707013 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5mqw\" (UniqueName: 
\"kubernetes.io/projected/d93469b0-7692-4570-84f3-0d5db0913296-kube-api-access-w5mqw\") on node \"crc\" DevicePath \"\"" Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.707037 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d93469b0-7692-4570-84f3-0d5db0913296-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 06:51:15 crc kubenswrapper[4947]: E1129 06:51:15.711122 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1728f6c4468229f047eeb81f5eb61f26893691a993d90b89e6b870f0a714927\": container with ID starting with f1728f6c4468229f047eeb81f5eb61f26893691a993d90b89e6b870f0a714927 not found: ID does not exist" containerID="f1728f6c4468229f047eeb81f5eb61f26893691a993d90b89e6b870f0a714927" Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.711218 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1728f6c4468229f047eeb81f5eb61f26893691a993d90b89e6b870f0a714927"} err="failed to get container status \"f1728f6c4468229f047eeb81f5eb61f26893691a993d90b89e6b870f0a714927\": rpc error: code = NotFound desc = could not find container \"f1728f6c4468229f047eeb81f5eb61f26893691a993d90b89e6b870f0a714927\": container with ID starting with f1728f6c4468229f047eeb81f5eb61f26893691a993d90b89e6b870f0a714927 not found: ID does not exist" Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.711268 4947 scope.go:117] "RemoveContainer" containerID="a022fe1733d0d25fdc8ea11c7f47ab538db52db4ec4c08dc30bbf139886cb077" Nov 29 06:51:15 crc kubenswrapper[4947]: E1129 06:51:15.715694 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a022fe1733d0d25fdc8ea11c7f47ab538db52db4ec4c08dc30bbf139886cb077\": container with ID starting with a022fe1733d0d25fdc8ea11c7f47ab538db52db4ec4c08dc30bbf139886cb077 not found: ID does not 
exist" containerID="a022fe1733d0d25fdc8ea11c7f47ab538db52db4ec4c08dc30bbf139886cb077" Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.715738 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a022fe1733d0d25fdc8ea11c7f47ab538db52db4ec4c08dc30bbf139886cb077"} err="failed to get container status \"a022fe1733d0d25fdc8ea11c7f47ab538db52db4ec4c08dc30bbf139886cb077\": rpc error: code = NotFound desc = could not find container \"a022fe1733d0d25fdc8ea11c7f47ab538db52db4ec4c08dc30bbf139886cb077\": container with ID starting with a022fe1733d0d25fdc8ea11c7f47ab538db52db4ec4c08dc30bbf139886cb077 not found: ID does not exist" Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.733767 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d93469b0-7692-4570-84f3-0d5db0913296-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d93469b0-7692-4570-84f3-0d5db0913296" (UID: "d93469b0-7692-4570-84f3-0d5db0913296"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.808607 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d93469b0-7692-4570-84f3-0d5db0913296-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.818955 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ngztm" Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.819037 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ngztm" Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.879922 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ngztm" Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.922989 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fkj96"] Nov 29 06:51:15 crc kubenswrapper[4947]: I1129 06:51:15.929455 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fkj96"] Nov 29 06:51:16 crc kubenswrapper[4947]: I1129 06:51:16.661517 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ngztm" Nov 29 06:51:17 crc kubenswrapper[4947]: I1129 06:51:17.189305 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d93469b0-7692-4570-84f3-0d5db0913296" path="/var/lib/kubelet/pods/d93469b0-7692-4570-84f3-0d5db0913296/volumes" Nov 29 06:51:18 crc kubenswrapper[4947]: I1129 06:51:18.895437 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ngztm"] Nov 29 06:51:18 crc kubenswrapper[4947]: I1129 06:51:18.895724 4947 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-ngztm" podUID="bf13e8d7-d570-455d-8498-85bf78a220df" containerName="registry-server" containerID="cri-o://b0e250cc54e399d5baa9392d2d6db9d750ddd668c4c729d46413e85dd381c703" gracePeriod=2 Nov 29 06:51:20 crc kubenswrapper[4947]: I1129 06:51:20.636558 4947 generic.go:334] "Generic (PLEG): container finished" podID="bf13e8d7-d570-455d-8498-85bf78a220df" containerID="b0e250cc54e399d5baa9392d2d6db9d750ddd668c4c729d46413e85dd381c703" exitCode=0 Nov 29 06:51:20 crc kubenswrapper[4947]: I1129 06:51:20.636643 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngztm" event={"ID":"bf13e8d7-d570-455d-8498-85bf78a220df","Type":"ContainerDied","Data":"b0e250cc54e399d5baa9392d2d6db9d750ddd668c4c729d46413e85dd381c703"} Nov 29 06:51:20 crc kubenswrapper[4947]: I1129 06:51:20.987934 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ngztm" Nov 29 06:51:21 crc kubenswrapper[4947]: I1129 06:51:21.095689 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf13e8d7-d570-455d-8498-85bf78a220df-catalog-content\") pod \"bf13e8d7-d570-455d-8498-85bf78a220df\" (UID: \"bf13e8d7-d570-455d-8498-85bf78a220df\") " Nov 29 06:51:21 crc kubenswrapper[4947]: I1129 06:51:21.095761 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf13e8d7-d570-455d-8498-85bf78a220df-utilities\") pod \"bf13e8d7-d570-455d-8498-85bf78a220df\" (UID: \"bf13e8d7-d570-455d-8498-85bf78a220df\") " Nov 29 06:51:21 crc kubenswrapper[4947]: I1129 06:51:21.095820 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55fn4\" (UniqueName: \"kubernetes.io/projected/bf13e8d7-d570-455d-8498-85bf78a220df-kube-api-access-55fn4\") pod 
\"bf13e8d7-d570-455d-8498-85bf78a220df\" (UID: \"bf13e8d7-d570-455d-8498-85bf78a220df\") " Nov 29 06:51:21 crc kubenswrapper[4947]: I1129 06:51:21.097077 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf13e8d7-d570-455d-8498-85bf78a220df-utilities" (OuterVolumeSpecName: "utilities") pod "bf13e8d7-d570-455d-8498-85bf78a220df" (UID: "bf13e8d7-d570-455d-8498-85bf78a220df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:51:21 crc kubenswrapper[4947]: I1129 06:51:21.105435 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf13e8d7-d570-455d-8498-85bf78a220df-kube-api-access-55fn4" (OuterVolumeSpecName: "kube-api-access-55fn4") pod "bf13e8d7-d570-455d-8498-85bf78a220df" (UID: "bf13e8d7-d570-455d-8498-85bf78a220df"). InnerVolumeSpecName "kube-api-access-55fn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:51:21 crc kubenswrapper[4947]: I1129 06:51:21.145419 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf13e8d7-d570-455d-8498-85bf78a220df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf13e8d7-d570-455d-8498-85bf78a220df" (UID: "bf13e8d7-d570-455d-8498-85bf78a220df"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:51:21 crc kubenswrapper[4947]: I1129 06:51:21.197425 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55fn4\" (UniqueName: \"kubernetes.io/projected/bf13e8d7-d570-455d-8498-85bf78a220df-kube-api-access-55fn4\") on node \"crc\" DevicePath \"\"" Nov 29 06:51:21 crc kubenswrapper[4947]: I1129 06:51:21.197479 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf13e8d7-d570-455d-8498-85bf78a220df-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 06:51:21 crc kubenswrapper[4947]: I1129 06:51:21.197489 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf13e8d7-d570-455d-8498-85bf78a220df-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 06:51:21 crc kubenswrapper[4947]: I1129 06:51:21.648679 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngztm" event={"ID":"bf13e8d7-d570-455d-8498-85bf78a220df","Type":"ContainerDied","Data":"4285d734167223af90950c5a6c1af4ffd802f4353aeb292836079716918ce5bb"} Nov 29 06:51:21 crc kubenswrapper[4947]: I1129 06:51:21.648781 4947 scope.go:117] "RemoveContainer" containerID="b0e250cc54e399d5baa9392d2d6db9d750ddd668c4c729d46413e85dd381c703" Nov 29 06:51:21 crc kubenswrapper[4947]: I1129 06:51:21.648774 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ngztm" Nov 29 06:51:21 crc kubenswrapper[4947]: I1129 06:51:21.671351 4947 scope.go:117] "RemoveContainer" containerID="58b5188b2570345475c390b2867507f21ab97f875eb40ebb24c106c86015e61d" Nov 29 06:51:21 crc kubenswrapper[4947]: I1129 06:51:21.675352 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ngztm"] Nov 29 06:51:21 crc kubenswrapper[4947]: I1129 06:51:21.685315 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ngztm"] Nov 29 06:51:21 crc kubenswrapper[4947]: I1129 06:51:21.698721 4947 scope.go:117] "RemoveContainer" containerID="f5f89d7b24dec075381fb8d528f1be1cccd99269a0c179e27bf84ed1f822b78f" Nov 29 06:51:22 crc kubenswrapper[4947]: I1129 06:51:22.987050 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 06:51:22 crc kubenswrapper[4947]: I1129 06:51:22.987117 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 06:51:23 crc kubenswrapper[4947]: I1129 06:51:23.188417 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf13e8d7-d570-455d-8498-85bf78a220df" path="/var/lib/kubelet/pods/bf13e8d7-d570-455d-8498-85bf78a220df/volumes" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.550900 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-cpl5k"] Nov 29 06:51:37 crc 
kubenswrapper[4947]: E1129 06:51:37.552419 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf13e8d7-d570-455d-8498-85bf78a220df" containerName="extract-content" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.552445 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf13e8d7-d570-455d-8498-85bf78a220df" containerName="extract-content" Nov 29 06:51:37 crc kubenswrapper[4947]: E1129 06:51:37.552466 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93469b0-7692-4570-84f3-0d5db0913296" containerName="registry-server" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.552474 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93469b0-7692-4570-84f3-0d5db0913296" containerName="registry-server" Nov 29 06:51:37 crc kubenswrapper[4947]: E1129 06:51:37.552486 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93469b0-7692-4570-84f3-0d5db0913296" containerName="extract-content" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.552498 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93469b0-7692-4570-84f3-0d5db0913296" containerName="extract-content" Nov 29 06:51:37 crc kubenswrapper[4947]: E1129 06:51:37.552527 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93469b0-7692-4570-84f3-0d5db0913296" containerName="extract-utilities" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.552539 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93469b0-7692-4570-84f3-0d5db0913296" containerName="extract-utilities" Nov 29 06:51:37 crc kubenswrapper[4947]: E1129 06:51:37.552556 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf13e8d7-d570-455d-8498-85bf78a220df" containerName="registry-server" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.552563 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf13e8d7-d570-455d-8498-85bf78a220df" containerName="registry-server" Nov 29 06:51:37 crc 
kubenswrapper[4947]: E1129 06:51:37.552575 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf13e8d7-d570-455d-8498-85bf78a220df" containerName="extract-utilities" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.552583 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf13e8d7-d570-455d-8498-85bf78a220df" containerName="extract-utilities" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.552768 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf13e8d7-d570-455d-8498-85bf78a220df" containerName="registry-server" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.552792 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="d93469b0-7692-4570-84f3-0d5db0913296" containerName="registry-server" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.553850 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cpl5k" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.560787 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5898f4cf77-vpq4v"] Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.565663 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5898f4cf77-vpq4v" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.589753 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-twsbv" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.589832 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-bxzvn" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.595533 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-c45vq"] Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.596721 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-c45vq" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.601919 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-xxvzg" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.625368 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5898f4cf77-vpq4v"] Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.637361 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-cpl5k"] Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.654098 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-c45vq"] Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.673520 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjxzs\" (UniqueName: 
\"kubernetes.io/projected/e9138b46-80df-4e49-a519-807c3037d727-kube-api-access-jjxzs\") pod \"barbican-operator-controller-manager-7d9dfd778-cpl5k\" (UID: \"e9138b46-80df-4e49-a519-807c3037d727\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cpl5k" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.673636 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tzxh\" (UniqueName: \"kubernetes.io/projected/a3977bd5-f0c4-4d95-bc6c-905bb2f03a07-kube-api-access-2tzxh\") pod \"cinder-operator-controller-manager-5898f4cf77-vpq4v\" (UID: \"a3977bd5-f0c4-4d95-bc6c-905bb2f03a07\") " pod="openstack-operators/cinder-operator-controller-manager-5898f4cf77-vpq4v" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.678301 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-g24m9"] Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.679566 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-g24m9" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.682822 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-kp9pf" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.709672 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-g24m9"] Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.733477 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-rrmmb"] Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.759336 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-rrmmb" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.765353 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-bpxcj" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.810805 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v474d\" (UniqueName: \"kubernetes.io/projected/b97d6c13-451c-43b2-a9cf-a1cb50dc4f71-kube-api-access-v474d\") pod \"glance-operator-controller-manager-668d9c48b9-g24m9\" (UID: \"b97d6c13-451c-43b2-a9cf-a1cb50dc4f71\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-g24m9" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.810994 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9rww\" (UniqueName: \"kubernetes.io/projected/278ee247-5381-4806-b4b7-9247f9ff162d-kube-api-access-h9rww\") pod \"designate-operator-controller-manager-78b4bc895b-c45vq\" (UID: \"278ee247-5381-4806-b4b7-9247f9ff162d\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-c45vq" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.811169 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tzxh\" (UniqueName: \"kubernetes.io/projected/a3977bd5-f0c4-4d95-bc6c-905bb2f03a07-kube-api-access-2tzxh\") pod \"cinder-operator-controller-manager-5898f4cf77-vpq4v\" (UID: \"a3977bd5-f0c4-4d95-bc6c-905bb2f03a07\") " pod="openstack-operators/cinder-operator-controller-manager-5898f4cf77-vpq4v" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.811407 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjxzs\" (UniqueName: \"kubernetes.io/projected/e9138b46-80df-4e49-a519-807c3037d727-kube-api-access-jjxzs\") pod 
\"barbican-operator-controller-manager-7d9dfd778-cpl5k\" (UID: \"e9138b46-80df-4e49-a519-807c3037d727\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cpl5k" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.814335 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-rrmmb"] Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.838338 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-wwvll"] Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.839949 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-wwvll" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.844095 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-jr7dv" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.870325 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-wwvll"] Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.881505 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjxzs\" (UniqueName: \"kubernetes.io/projected/e9138b46-80df-4e49-a519-807c3037d727-kube-api-access-jjxzs\") pod \"barbican-operator-controller-manager-7d9dfd778-cpl5k\" (UID: \"e9138b46-80df-4e49-a519-807c3037d727\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cpl5k" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.884360 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-cfpc5"] Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.885785 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cfpc5" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.886944 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cpl5k" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.888841 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-s7x9c" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.895164 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-p6ml6"] Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.897139 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tzxh\" (UniqueName: \"kubernetes.io/projected/a3977bd5-f0c4-4d95-bc6c-905bb2f03a07-kube-api-access-2tzxh\") pod \"cinder-operator-controller-manager-5898f4cf77-vpq4v\" (UID: \"a3977bd5-f0c4-4d95-bc6c-905bb2f03a07\") " pod="openstack-operators/cinder-operator-controller-manager-5898f4cf77-vpq4v" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.897756 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5898f4cf77-vpq4v" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.902842 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p6ml6" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.914878 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-q2xj6" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.916633 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v474d\" (UniqueName: \"kubernetes.io/projected/b97d6c13-451c-43b2-a9cf-a1cb50dc4f71-kube-api-access-v474d\") pod \"glance-operator-controller-manager-668d9c48b9-g24m9\" (UID: \"b97d6c13-451c-43b2-a9cf-a1cb50dc4f71\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-g24m9" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.916705 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwmnx\" (UniqueName: \"kubernetes.io/projected/3c7abd7d-ad0b-4a8e-9de6-95a7da2c11de-kube-api-access-dwmnx\") pod \"heat-operator-controller-manager-5f64f6f8bb-rrmmb\" (UID: \"3c7abd7d-ad0b-4a8e-9de6-95a7da2c11de\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-rrmmb" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.916731 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9rww\" (UniqueName: \"kubernetes.io/projected/278ee247-5381-4806-b4b7-9247f9ff162d-kube-api-access-h9rww\") pod \"designate-operator-controller-manager-78b4bc895b-c45vq\" (UID: \"278ee247-5381-4806-b4b7-9247f9ff162d\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-c45vq" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.916757 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgv8z\" (UniqueName: \"kubernetes.io/projected/0502c0dc-a197-41c4-a69a-ee8b633f4cb6-kube-api-access-zgv8z\") pod 
\"horizon-operator-controller-manager-68c6d99b8f-wwvll\" (UID: \"0502c0dc-a197-41c4-a69a-ee8b633f4cb6\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-wwvll" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.916818 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkwjj\" (UniqueName: \"kubernetes.io/projected/f5486cef-a49e-43c1-b4b2-798ee149238f-kube-api-access-bkwjj\") pod \"keystone-operator-controller-manager-546d4bdf48-cfpc5\" (UID: \"f5486cef-a49e-43c1-b4b2-798ee149238f\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cfpc5" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.916841 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sclxr\" (UniqueName: \"kubernetes.io/projected/8f81b728-2bf1-4638-91f7-717ab75349f3-kube-api-access-sclxr\") pod \"ironic-operator-controller-manager-6c548fd776-p6ml6\" (UID: \"8f81b728-2bf1-4638-91f7-717ab75349f3\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p6ml6" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.955050 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9rww\" (UniqueName: \"kubernetes.io/projected/278ee247-5381-4806-b4b7-9247f9ff162d-kube-api-access-h9rww\") pod \"designate-operator-controller-manager-78b4bc895b-c45vq\" (UID: \"278ee247-5381-4806-b4b7-9247f9ff162d\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-c45vq" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.963613 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj"] Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.965684 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.973850 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v474d\" (UniqueName: \"kubernetes.io/projected/b97d6c13-451c-43b2-a9cf-a1cb50dc4f71-kube-api-access-v474d\") pod \"glance-operator-controller-manager-668d9c48b9-g24m9\" (UID: \"b97d6c13-451c-43b2-a9cf-a1cb50dc4f71\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-g24m9" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.976407 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-27k89"] Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.976834 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-c45vq" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.995130 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-27k89" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.999322 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 29 06:51:37 crc kubenswrapper[4947]: I1129 06:51:37.999808 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-6hzdl" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.006923 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-cfpc5"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.010661 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-g24m9" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.020943 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-7zsvr" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.021276 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkwjj\" (UniqueName: \"kubernetes.io/projected/f5486cef-a49e-43c1-b4b2-798ee149238f-kube-api-access-bkwjj\") pod \"keystone-operator-controller-manager-546d4bdf48-cfpc5\" (UID: \"f5486cef-a49e-43c1-b4b2-798ee149238f\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cfpc5" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.021343 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sclxr\" (UniqueName: \"kubernetes.io/projected/8f81b728-2bf1-4638-91f7-717ab75349f3-kube-api-access-sclxr\") pod \"ironic-operator-controller-manager-6c548fd776-p6ml6\" (UID: \"8f81b728-2bf1-4638-91f7-717ab75349f3\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p6ml6" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.021379 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwmnx\" (UniqueName: \"kubernetes.io/projected/3c7abd7d-ad0b-4a8e-9de6-95a7da2c11de-kube-api-access-dwmnx\") pod \"heat-operator-controller-manager-5f64f6f8bb-rrmmb\" (UID: \"3c7abd7d-ad0b-4a8e-9de6-95a7da2c11de\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-rrmmb" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.021420 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgv8z\" (UniqueName: \"kubernetes.io/projected/0502c0dc-a197-41c4-a69a-ee8b633f4cb6-kube-api-access-zgv8z\") pod 
\"horizon-operator-controller-manager-68c6d99b8f-wwvll\" (UID: \"0502c0dc-a197-41c4-a69a-ee8b633f4cb6\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-wwvll" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.071039 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jkq9j"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.072442 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jkq9j" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.079648 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwmnx\" (UniqueName: \"kubernetes.io/projected/3c7abd7d-ad0b-4a8e-9de6-95a7da2c11de-kube-api-access-dwmnx\") pod \"heat-operator-controller-manager-5f64f6f8bb-rrmmb\" (UID: \"3c7abd7d-ad0b-4a8e-9de6-95a7da2c11de\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-rrmmb" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.090524 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkwjj\" (UniqueName: \"kubernetes.io/projected/f5486cef-a49e-43c1-b4b2-798ee149238f-kube-api-access-bkwjj\") pod \"keystone-operator-controller-manager-546d4bdf48-cfpc5\" (UID: \"f5486cef-a49e-43c1-b4b2-798ee149238f\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cfpc5" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.098276 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sclxr\" (UniqueName: \"kubernetes.io/projected/8f81b728-2bf1-4638-91f7-717ab75349f3-kube-api-access-sclxr\") pod \"ironic-operator-controller-manager-6c548fd776-p6ml6\" (UID: \"8f81b728-2bf1-4638-91f7-717ab75349f3\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p6ml6" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 
06:51:38.105858 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-mgc6w" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.114232 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgv8z\" (UniqueName: \"kubernetes.io/projected/0502c0dc-a197-41c4-a69a-ee8b633f4cb6-kube-api-access-zgv8z\") pod \"horizon-operator-controller-manager-68c6d99b8f-wwvll\" (UID: \"0502c0dc-a197-41c4-a69a-ee8b633f4cb6\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-wwvll" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.120766 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-rrmmb" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.121207 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.122565 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9qdb\" (UniqueName: \"kubernetes.io/projected/fcd59aa5-5edf-4200-aa2d-298f8b452fff-kube-api-access-q9qdb\") pod \"manila-operator-controller-manager-6546668bfd-27k89\" (UID: \"fcd59aa5-5edf-4200-aa2d-298f8b452fff\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-27k89" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.122609 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1873231-ef75-414f-85a0-9536e7e45d24-cert\") pod \"infra-operator-controller-manager-57548d458d-4ffsj\" (UID: \"b1873231-ef75-414f-85a0-9536e7e45d24\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 
06:51:38.122669 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcqw4\" (UniqueName: \"kubernetes.io/projected/b1873231-ef75-414f-85a0-9536e7e45d24-kube-api-access-hcqw4\") pod \"infra-operator-controller-manager-57548d458d-4ffsj\" (UID: \"b1873231-ef75-414f-85a0-9536e7e45d24\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.156816 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p6ml6" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.174369 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-p6ml6"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.227239 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9qdb\" (UniqueName: \"kubernetes.io/projected/fcd59aa5-5edf-4200-aa2d-298f8b452fff-kube-api-access-q9qdb\") pod \"manila-operator-controller-manager-6546668bfd-27k89\" (UID: \"fcd59aa5-5edf-4200-aa2d-298f8b452fff\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-27k89" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.227654 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1873231-ef75-414f-85a0-9536e7e45d24-cert\") pod \"infra-operator-controller-manager-57548d458d-4ffsj\" (UID: \"b1873231-ef75-414f-85a0-9536e7e45d24\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.227767 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wrrc\" (UniqueName: 
\"kubernetes.io/projected/3c962745-1298-4fbc-a4c7-ae2b75c1ce49-kube-api-access-4wrrc\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-jkq9j\" (UID: \"3c962745-1298-4fbc-a4c7-ae2b75c1ce49\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jkq9j" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.227855 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcqw4\" (UniqueName: \"kubernetes.io/projected/b1873231-ef75-414f-85a0-9536e7e45d24-kube-api-access-hcqw4\") pod \"infra-operator-controller-manager-57548d458d-4ffsj\" (UID: \"b1873231-ef75-414f-85a0-9536e7e45d24\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj" Nov 29 06:51:38 crc kubenswrapper[4947]: E1129 06:51:38.228571 4947 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 06:51:38 crc kubenswrapper[4947]: E1129 06:51:38.228673 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1873231-ef75-414f-85a0-9536e7e45d24-cert podName:b1873231-ef75-414f-85a0-9536e7e45d24 nodeName:}" failed. No retries permitted until 2025-11-29 06:51:38.728649005 +0000 UTC m=+1049.773031086 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1873231-ef75-414f-85a0-9536e7e45d24-cert") pod "infra-operator-controller-manager-57548d458d-4ffsj" (UID: "b1873231-ef75-414f-85a0-9536e7e45d24") : secret "infra-operator-webhook-server-cert" not found Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.271512 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9qdb\" (UniqueName: \"kubernetes.io/projected/fcd59aa5-5edf-4200-aa2d-298f8b452fff-kube-api-access-q9qdb\") pod \"manila-operator-controller-manager-6546668bfd-27k89\" (UID: \"fcd59aa5-5edf-4200-aa2d-298f8b452fff\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-27k89" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.278312 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcqw4\" (UniqueName: \"kubernetes.io/projected/b1873231-ef75-414f-85a0-9536e7e45d24-kube-api-access-hcqw4\") pod \"infra-operator-controller-manager-57548d458d-4ffsj\" (UID: \"b1873231-ef75-414f-85a0-9536e7e45d24\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.322014 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-wwvll" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.331840 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-27k89"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.338665 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wrrc\" (UniqueName: \"kubernetes.io/projected/3c962745-1298-4fbc-a4c7-ae2b75c1ce49-kube-api-access-4wrrc\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-jkq9j\" (UID: \"3c962745-1298-4fbc-a4c7-ae2b75c1ce49\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jkq9j" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.368081 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-mmx2k"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.370749 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-mmx2k" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.385316 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-5lxnb" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.390949 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cfpc5" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.419267 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wrrc\" (UniqueName: \"kubernetes.io/projected/3c962745-1298-4fbc-a4c7-ae2b75c1ce49-kube-api-access-4wrrc\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-jkq9j\" (UID: \"3c962745-1298-4fbc-a4c7-ae2b75c1ce49\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jkq9j" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.420102 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-mmx2k"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.447493 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-r7gd9"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.448833 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-r7gd9" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.457760 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-k9h7j" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.471921 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jkq9j"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.493297 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-9bjj9"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.498650 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9bjj9" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.506387 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-s7x44" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.536031 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-27k89" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.551084 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4gjl\" (UniqueName: \"kubernetes.io/projected/edb90e2e-84a1-4544-87b4-7e26c9dfd9bc-kube-api-access-n4gjl\") pod \"nova-operator-controller-manager-697bc559fc-r7gd9\" (UID: \"edb90e2e-84a1-4544-87b4-7e26c9dfd9bc\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-r7gd9" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.551480 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvhdh\" (UniqueName: \"kubernetes.io/projected/401765a6-8ea5-478e-bd29-5c2c717b57d3-kube-api-access-fvhdh\") pod \"mariadb-operator-controller-manager-56bbcc9d85-mmx2k\" (UID: \"401765a6-8ea5-478e-bd29-5c2c717b57d3\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-mmx2k" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.565910 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jkq9j" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.581357 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.583041 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.595609 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.652402 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-l9txg" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.654817 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-r7gd9"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.657546 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvhdh\" (UniqueName: \"kubernetes.io/projected/401765a6-8ea5-478e-bd29-5c2c717b57d3-kube-api-access-fvhdh\") pod \"mariadb-operator-controller-manager-56bbcc9d85-mmx2k\" (UID: \"401765a6-8ea5-478e-bd29-5c2c717b57d3\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-mmx2k" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.657648 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4gjl\" (UniqueName: \"kubernetes.io/projected/edb90e2e-84a1-4544-87b4-7e26c9dfd9bc-kube-api-access-n4gjl\") pod \"nova-operator-controller-manager-697bc559fc-r7gd9\" (UID: \"edb90e2e-84a1-4544-87b4-7e26c9dfd9bc\") " 
pod="openstack-operators/nova-operator-controller-manager-697bc559fc-r7gd9" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.657741 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29pg7\" (UniqueName: \"kubernetes.io/projected/7834eae4-c153-4d24-be4b-cfeb03744cff-kube-api-access-29pg7\") pod \"octavia-operator-controller-manager-998648c74-9bjj9\" (UID: \"7834eae4-c153-4d24-be4b-cfeb03744cff\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-9bjj9" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.685064 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-lhstd"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.698676 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvhdh\" (UniqueName: \"kubernetes.io/projected/401765a6-8ea5-478e-bd29-5c2c717b57d3-kube-api-access-fvhdh\") pod \"mariadb-operator-controller-manager-56bbcc9d85-mmx2k\" (UID: \"401765a6-8ea5-478e-bd29-5c2c717b57d3\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-mmx2k" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.702931 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lhstd" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.711868 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-92kq9" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.715666 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4gjl\" (UniqueName: \"kubernetes.io/projected/edb90e2e-84a1-4544-87b4-7e26c9dfd9bc-kube-api-access-n4gjl\") pod \"nova-operator-controller-manager-697bc559fc-r7gd9\" (UID: \"edb90e2e-84a1-4544-87b4-7e26c9dfd9bc\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-r7gd9" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.728864 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-9bjj9"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.759025 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1873231-ef75-414f-85a0-9536e7e45d24-cert\") pod \"infra-operator-controller-manager-57548d458d-4ffsj\" (UID: \"b1873231-ef75-414f-85a0-9536e7e45d24\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.759101 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4af0248a-8357-4dce-95fc-1ed6384dc3f2-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2\" (UID: \"4af0248a-8357-4dce-95fc-1ed6384dc3f2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2" Nov 29 06:51:38 crc kubenswrapper[4947]: E1129 06:51:38.759300 4947 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Nov 29 06:51:38 crc kubenswrapper[4947]: E1129 06:51:38.759362 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1873231-ef75-414f-85a0-9536e7e45d24-cert podName:b1873231-ef75-414f-85a0-9536e7e45d24 nodeName:}" failed. No retries permitted until 2025-11-29 06:51:39.759344814 +0000 UTC m=+1050.803726895 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1873231-ef75-414f-85a0-9536e7e45d24-cert") pod "infra-operator-controller-manager-57548d458d-4ffsj" (UID: "b1873231-ef75-414f-85a0-9536e7e45d24") : secret "infra-operator-webhook-server-cert" not found Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.759585 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-mmx2k" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.759958 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29pg7\" (UniqueName: \"kubernetes.io/projected/7834eae4-c153-4d24-be4b-cfeb03744cff-kube-api-access-29pg7\") pod \"octavia-operator-controller-manager-998648c74-9bjj9\" (UID: \"7834eae4-c153-4d24-be4b-cfeb03744cff\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-9bjj9" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.760011 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr42v\" (UniqueName: \"kubernetes.io/projected/4af0248a-8357-4dce-95fc-1ed6384dc3f2-kube-api-access-tr42v\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2\" (UID: \"4af0248a-8357-4dce-95fc-1ed6384dc3f2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.788661 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-r7gd9" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.803177 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29pg7\" (UniqueName: \"kubernetes.io/projected/7834eae4-c153-4d24-be4b-cfeb03744cff-kube-api-access-29pg7\") pod \"octavia-operator-controller-manager-998648c74-9bjj9\" (UID: \"7834eae4-c153-4d24-be4b-cfeb03744cff\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-9bjj9" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.808447 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.814766 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-r4qxh"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.816539 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-r4qxh" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.820674 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-vvdpt" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.836423 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-lhstd"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.844441 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-r4qxh"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.851641 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9bjj9" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.861328 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-86lcn"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.863126 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4af0248a-8357-4dce-95fc-1ed6384dc3f2-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2\" (UID: \"4af0248a-8357-4dce-95fc-1ed6384dc3f2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.863532 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-86lcn" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.863549 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8s4p\" (UniqueName: \"kubernetes.io/projected/d5a6afb3-6d78-488e-9f48-9d4b58f998bd-kube-api-access-x8s4p\") pod \"ovn-operator-controller-manager-b6456fdb6-lhstd\" (UID: \"d5a6afb3-6d78-488e-9f48-9d4b58f998bd\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lhstd" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.863602 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr42v\" (UniqueName: \"kubernetes.io/projected/4af0248a-8357-4dce-95fc-1ed6384dc3f2-kube-api-access-tr42v\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2\" (UID: \"4af0248a-8357-4dce-95fc-1ed6384dc3f2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2" Nov 29 06:51:38 crc kubenswrapper[4947]: E1129 06:51:38.864058 4947 secret.go:188] 
Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 06:51:38 crc kubenswrapper[4947]: E1129 06:51:38.864180 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4af0248a-8357-4dce-95fc-1ed6384dc3f2-cert podName:4af0248a-8357-4dce-95fc-1ed6384dc3f2 nodeName:}" failed. No retries permitted until 2025-11-29 06:51:39.364143802 +0000 UTC m=+1050.408525943 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4af0248a-8357-4dce-95fc-1ed6384dc3f2-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2" (UID: "4af0248a-8357-4dce-95fc-1ed6384dc3f2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.867231 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-nh9x9" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.871576 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-d5zp8"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.873901 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-d5zp8" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.874074 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lc774"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.875814 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lc774" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.879267 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-xlxp7" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.879613 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-rn2vj" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.889350 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr42v\" (UniqueName: \"kubernetes.io/projected/4af0248a-8357-4dce-95fc-1ed6384dc3f2-kube-api-access-tr42v\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2\" (UID: \"4af0248a-8357-4dce-95fc-1ed6384dc3f2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.906691 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-86lcn"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.935259 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-d5zp8"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.943298 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lc774"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.948984 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-hxtg8"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.964318 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-hxtg8"] Nov 29 06:51:38 crc 
kubenswrapper[4947]: I1129 06:51:38.964846 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8s4p\" (UniqueName: \"kubernetes.io/projected/d5a6afb3-6d78-488e-9f48-9d4b58f998bd-kube-api-access-x8s4p\") pod \"ovn-operator-controller-manager-b6456fdb6-lhstd\" (UID: \"d5a6afb3-6d78-488e-9f48-9d4b58f998bd\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lhstd" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.964953 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnqkd\" (UniqueName: \"kubernetes.io/projected/49e564a6-0297-489a-8239-d195776466e7-kube-api-access-vnqkd\") pod \"test-operator-controller-manager-5854674fcc-86lcn\" (UID: \"49e564a6-0297-489a-8239-d195776466e7\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-86lcn" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.965014 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhxmb\" (UniqueName: \"kubernetes.io/projected/bb3b3efc-1204-4486-b13d-be927701b46a-kube-api-access-hhxmb\") pod \"placement-operator-controller-manager-78f8948974-r4qxh\" (UID: \"bb3b3efc-1204-4486-b13d-be927701b46a\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-r4qxh" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.965046 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkdcp\" (UniqueName: \"kubernetes.io/projected/3741905b-b90e-4f39-b6f9-8e197ebd3b42-kube-api-access-pkdcp\") pod \"swift-operator-controller-manager-5f8c65bbfc-d5zp8\" (UID: \"3741905b-b90e-4f39-b6f9-8e197ebd3b42\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-d5zp8" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.965087 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blk9z\" (UniqueName: \"kubernetes.io/projected/bae6a0cd-2803-4ddb-9de7-11cb8b39f0fb-kube-api-access-blk9z\") pod \"telemetry-operator-controller-manager-76cc84c6bb-lc774\" (UID: \"bae6a0cd-2803-4ddb-9de7-11cb8b39f0fb\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lc774" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.965402 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-hxtg8" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.974595 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-8tn72" Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.987795 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd"] Nov 29 06:51:38 crc kubenswrapper[4947]: I1129 06:51:38.994535 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.002302 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.002484 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-cfhck" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.002515 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd"] Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.002701 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.007932 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-256pd"] Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.009435 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-256pd" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.016933 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-q8k4r" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.017003 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8s4p\" (UniqueName: \"kubernetes.io/projected/d5a6afb3-6d78-488e-9f48-9d4b58f998bd-kube-api-access-x8s4p\") pod \"ovn-operator-controller-manager-b6456fdb6-lhstd\" (UID: \"d5a6afb3-6d78-488e-9f48-9d4b58f998bd\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lhstd" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.024878 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-256pd"] Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.067177 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx47n\" (UniqueName: \"kubernetes.io/projected/3c90fa55-0db3-435f-9927-983db02d2fac-kube-api-access-qx47n\") pod \"watcher-operator-controller-manager-769dc69bc-hxtg8\" (UID: \"3c90fa55-0db3-435f-9927-983db02d2fac\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-hxtg8" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.067257 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-metrics-certs\") pod \"openstack-operator-controller-manager-6c7b7f98c7-qvfhd\" (UID: \"e2802692-9854-4734-b9b0-c62eb59fb041\") " pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.067297 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vnqkd\" (UniqueName: \"kubernetes.io/projected/49e564a6-0297-489a-8239-d195776466e7-kube-api-access-vnqkd\") pod \"test-operator-controller-manager-5854674fcc-86lcn\" (UID: \"49e564a6-0297-489a-8239-d195776466e7\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-86lcn" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.067354 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhxmb\" (UniqueName: \"kubernetes.io/projected/bb3b3efc-1204-4486-b13d-be927701b46a-kube-api-access-hhxmb\") pod \"placement-operator-controller-manager-78f8948974-r4qxh\" (UID: \"bb3b3efc-1204-4486-b13d-be927701b46a\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-r4qxh" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.067391 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkdcp\" (UniqueName: \"kubernetes.io/projected/3741905b-b90e-4f39-b6f9-8e197ebd3b42-kube-api-access-pkdcp\") pod \"swift-operator-controller-manager-5f8c65bbfc-d5zp8\" (UID: \"3741905b-b90e-4f39-b6f9-8e197ebd3b42\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-d5zp8" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.067425 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-webhook-certs\") pod \"openstack-operator-controller-manager-6c7b7f98c7-qvfhd\" (UID: \"e2802692-9854-4734-b9b0-c62eb59fb041\") " pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.067457 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blk9z\" (UniqueName: 
\"kubernetes.io/projected/bae6a0cd-2803-4ddb-9de7-11cb8b39f0fb-kube-api-access-blk9z\") pod \"telemetry-operator-controller-manager-76cc84c6bb-lc774\" (UID: \"bae6a0cd-2803-4ddb-9de7-11cb8b39f0fb\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lc774" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.067499 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n64tb\" (UniqueName: \"kubernetes.io/projected/e2802692-9854-4734-b9b0-c62eb59fb041-kube-api-access-n64tb\") pod \"openstack-operator-controller-manager-6c7b7f98c7-qvfhd\" (UID: \"e2802692-9854-4734-b9b0-c62eb59fb041\") " pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.075057 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lhstd" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.110918 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhxmb\" (UniqueName: \"kubernetes.io/projected/bb3b3efc-1204-4486-b13d-be927701b46a-kube-api-access-hhxmb\") pod \"placement-operator-controller-manager-78f8948974-r4qxh\" (UID: \"bb3b3efc-1204-4486-b13d-be927701b46a\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-r4qxh" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.126699 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnqkd\" (UniqueName: \"kubernetes.io/projected/49e564a6-0297-489a-8239-d195776466e7-kube-api-access-vnqkd\") pod \"test-operator-controller-manager-5854674fcc-86lcn\" (UID: \"49e564a6-0297-489a-8239-d195776466e7\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-86lcn" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.137359 4947 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-blk9z\" (UniqueName: \"kubernetes.io/projected/bae6a0cd-2803-4ddb-9de7-11cb8b39f0fb-kube-api-access-blk9z\") pod \"telemetry-operator-controller-manager-76cc84c6bb-lc774\" (UID: \"bae6a0cd-2803-4ddb-9de7-11cb8b39f0fb\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lc774" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.138146 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkdcp\" (UniqueName: \"kubernetes.io/projected/3741905b-b90e-4f39-b6f9-8e197ebd3b42-kube-api-access-pkdcp\") pod \"swift-operator-controller-manager-5f8c65bbfc-d5zp8\" (UID: \"3741905b-b90e-4f39-b6f9-8e197ebd3b42\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-d5zp8" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.138204 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5898f4cf77-vpq4v"] Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.163721 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-r4qxh" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.170511 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hwgn\" (UniqueName: \"kubernetes.io/projected/3b8c773c-790f-4897-bf54-8ea2a8780a9a-kube-api-access-8hwgn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-256pd\" (UID: \"3b8c773c-790f-4897-bf54-8ea2a8780a9a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-256pd" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.170788 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx47n\" (UniqueName: \"kubernetes.io/projected/3c90fa55-0db3-435f-9927-983db02d2fac-kube-api-access-qx47n\") pod \"watcher-operator-controller-manager-769dc69bc-hxtg8\" (UID: \"3c90fa55-0db3-435f-9927-983db02d2fac\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-hxtg8" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.170849 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-metrics-certs\") pod \"openstack-operator-controller-manager-6c7b7f98c7-qvfhd\" (UID: \"e2802692-9854-4734-b9b0-c62eb59fb041\") " pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.170927 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-webhook-certs\") pod \"openstack-operator-controller-manager-6c7b7f98c7-qvfhd\" (UID: \"e2802692-9854-4734-b9b0-c62eb59fb041\") " pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.170988 
4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n64tb\" (UniqueName: \"kubernetes.io/projected/e2802692-9854-4734-b9b0-c62eb59fb041-kube-api-access-n64tb\") pod \"openstack-operator-controller-manager-6c7b7f98c7-qvfhd\" (UID: \"e2802692-9854-4734-b9b0-c62eb59fb041\") " pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd" Nov 29 06:51:39 crc kubenswrapper[4947]: E1129 06:51:39.172297 4947 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 06:51:39 crc kubenswrapper[4947]: E1129 06:51:39.175047 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-webhook-certs podName:e2802692-9854-4734-b9b0-c62eb59fb041 nodeName:}" failed. No retries permitted until 2025-11-29 06:51:39.675020756 +0000 UTC m=+1050.719402837 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-webhook-certs") pod "openstack-operator-controller-manager-6c7b7f98c7-qvfhd" (UID: "e2802692-9854-4734-b9b0-c62eb59fb041") : secret "webhook-server-cert" not found Nov 29 06:51:39 crc kubenswrapper[4947]: E1129 06:51:39.177344 4947 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 29 06:51:39 crc kubenswrapper[4947]: E1129 06:51:39.177472 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-metrics-certs podName:e2802692-9854-4734-b9b0-c62eb59fb041 nodeName:}" failed. No retries permitted until 2025-11-29 06:51:39.677441617 +0000 UTC m=+1050.721823698 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-metrics-certs") pod "openstack-operator-controller-manager-6c7b7f98c7-qvfhd" (UID: "e2802692-9854-4734-b9b0-c62eb59fb041") : secret "metrics-server-cert" not found Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.200802 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx47n\" (UniqueName: \"kubernetes.io/projected/3c90fa55-0db3-435f-9927-983db02d2fac-kube-api-access-qx47n\") pod \"watcher-operator-controller-manager-769dc69bc-hxtg8\" (UID: \"3c90fa55-0db3-435f-9927-983db02d2fac\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-hxtg8" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.206146 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n64tb\" (UniqueName: \"kubernetes.io/projected/e2802692-9854-4734-b9b0-c62eb59fb041-kube-api-access-n64tb\") pod \"openstack-operator-controller-manager-6c7b7f98c7-qvfhd\" (UID: \"e2802692-9854-4734-b9b0-c62eb59fb041\") " pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd" Nov 29 06:51:39 crc kubenswrapper[4947]: W1129 06:51:39.206358 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb97d6c13_451c_43b2_a9cf_a1cb50dc4f71.slice/crio-86dea920cea70d7972c89f0f47efaf49ea849520da200eb5b1bddf56f46b2007 WatchSource:0}: Error finding container 86dea920cea70d7972c89f0f47efaf49ea849520da200eb5b1bddf56f46b2007: Status 404 returned error can't find the container with id 86dea920cea70d7972c89f0f47efaf49ea849520da200eb5b1bddf56f46b2007 Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.238843 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-c45vq"] Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 
06:51:39.260176 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-g24m9"] Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.272196 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hwgn\" (UniqueName: \"kubernetes.io/projected/3b8c773c-790f-4897-bf54-8ea2a8780a9a-kube-api-access-8hwgn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-256pd\" (UID: \"3b8c773c-790f-4897-bf54-8ea2a8780a9a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-256pd" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.295619 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hwgn\" (UniqueName: \"kubernetes.io/projected/3b8c773c-790f-4897-bf54-8ea2a8780a9a-kube-api-access-8hwgn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-256pd\" (UID: \"3b8c773c-790f-4897-bf54-8ea2a8780a9a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-256pd" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.356946 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-86lcn" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.374451 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4af0248a-8357-4dce-95fc-1ed6384dc3f2-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2\" (UID: \"4af0248a-8357-4dce-95fc-1ed6384dc3f2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2" Nov 29 06:51:39 crc kubenswrapper[4947]: E1129 06:51:39.375546 4947 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 06:51:39 crc kubenswrapper[4947]: E1129 06:51:39.375644 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4af0248a-8357-4dce-95fc-1ed6384dc3f2-cert podName:4af0248a-8357-4dce-95fc-1ed6384dc3f2 nodeName:}" failed. No retries permitted until 2025-11-29 06:51:40.375616953 +0000 UTC m=+1051.419999034 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4af0248a-8357-4dce-95fc-1ed6384dc3f2-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2" (UID: "4af0248a-8357-4dce-95fc-1ed6384dc3f2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.380639 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lc774" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.398413 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-p6ml6"] Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.405488 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-d5zp8" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.412319 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-wwvll"] Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.429073 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-hxtg8" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.431687 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-cpl5k"] Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.504734 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-256pd" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.595613 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-cfpc5"] Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.622979 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-rrmmb"] Nov 29 06:51:39 crc kubenswrapper[4947]: W1129 06:51:39.631115 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5486cef_a49e_43c1_b4b2_798ee149238f.slice/crio-3d3679e849dd043d9fd60e23f996cb5e468818f11908a3ca4f8d327764a904ed WatchSource:0}: Error finding container 3d3679e849dd043d9fd60e23f996cb5e468818f11908a3ca4f8d327764a904ed: Status 404 returned error can't find the container with id 3d3679e849dd043d9fd60e23f996cb5e468818f11908a3ca4f8d327764a904ed Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.648503 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-27k89"] Nov 29 06:51:39 crc kubenswrapper[4947]: W1129 06:51:39.653114 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c7abd7d_ad0b_4a8e_9de6_95a7da2c11de.slice/crio-a12f96ab0b490a26db8de73b839f30804b3c0ba1d29c98f4e69477eec8578276 WatchSource:0}: Error finding container a12f96ab0b490a26db8de73b839f30804b3c0ba1d29c98f4e69477eec8578276: Status 404 returned error can't find the container with id a12f96ab0b490a26db8de73b839f30804b3c0ba1d29c98f4e69477eec8578276 Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.672047 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jkq9j"] Nov 29 06:51:39 crc kubenswrapper[4947]: W1129 06:51:39.673290 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcd59aa5_5edf_4200_aa2d_298f8b452fff.slice/crio-808241326c2bd9140f5bfc5299d3b7cc4a4bf7d291fe89922dc3dc4e123585cf WatchSource:0}: Error finding container 808241326c2bd9140f5bfc5299d3b7cc4a4bf7d291fe89922dc3dc4e123585cf: Status 404 returned error can't find the container with id 808241326c2bd9140f5bfc5299d3b7cc4a4bf7d291fe89922dc3dc4e123585cf Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.682923 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-metrics-certs\") pod \"openstack-operator-controller-manager-6c7b7f98c7-qvfhd\" (UID: \"e2802692-9854-4734-b9b0-c62eb59fb041\") " pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd" Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.683027 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-webhook-certs\") pod \"openstack-operator-controller-manager-6c7b7f98c7-qvfhd\" (UID: \"e2802692-9854-4734-b9b0-c62eb59fb041\") " pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd" Nov 29 06:51:39 crc kubenswrapper[4947]: E1129 06:51:39.683250 4947 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 06:51:39 crc kubenswrapper[4947]: E1129 06:51:39.683320 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-webhook-certs podName:e2802692-9854-4734-b9b0-c62eb59fb041 nodeName:}" failed. No retries permitted until 2025-11-29 06:51:40.683297696 +0000 UTC m=+1051.727679777 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-webhook-certs") pod "openstack-operator-controller-manager-6c7b7f98c7-qvfhd" (UID: "e2802692-9854-4734-b9b0-c62eb59fb041") : secret "webhook-server-cert" not found Nov 29 06:51:39 crc kubenswrapper[4947]: E1129 06:51:39.683528 4947 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 29 06:51:39 crc kubenswrapper[4947]: E1129 06:51:39.683673 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-metrics-certs podName:e2802692-9854-4734-b9b0-c62eb59fb041 nodeName:}" failed. No retries permitted until 2025-11-29 06:51:40.683631645 +0000 UTC m=+1051.728013896 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-metrics-certs") pod "openstack-operator-controller-manager-6c7b7f98c7-qvfhd" (UID: "e2802692-9854-4734-b9b0-c62eb59fb041") : secret "metrics-server-cert" not found Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.784951 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1873231-ef75-414f-85a0-9536e7e45d24-cert\") pod \"infra-operator-controller-manager-57548d458d-4ffsj\" (UID: \"b1873231-ef75-414f-85a0-9536e7e45d24\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj" Nov 29 06:51:39 crc kubenswrapper[4947]: E1129 06:51:39.785109 4947 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 06:51:39 crc kubenswrapper[4947]: E1129 06:51:39.785170 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1873231-ef75-414f-85a0-9536e7e45d24-cert podName:b1873231-ef75-414f-85a0-9536e7e45d24 nodeName:}" failed. No retries permitted until 2025-11-29 06:51:41.785151789 +0000 UTC m=+1052.829533870 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1873231-ef75-414f-85a0-9536e7e45d24-cert") pod "infra-operator-controller-manager-57548d458d-4ffsj" (UID: "b1873231-ef75-414f-85a0-9536e7e45d24") : secret "infra-operator-webhook-server-cert" not found Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.814950 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p6ml6" event={"ID":"8f81b728-2bf1-4638-91f7-717ab75349f3","Type":"ContainerStarted","Data":"383286cc88a9a5744d5666a70f24dc096102d324fc1de007db202e6cb91ba0da"} Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.816953 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-c45vq" event={"ID":"278ee247-5381-4806-b4b7-9247f9ff162d","Type":"ContainerStarted","Data":"a63fd4dc307493261816531e2e09a23e755eebda9abaddc9615efb48df82acdc"} Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.817768 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5898f4cf77-vpq4v" event={"ID":"a3977bd5-f0c4-4d95-bc6c-905bb2f03a07","Type":"ContainerStarted","Data":"3751ab6ec1eb927f74054dbcef3c4c37ce54785412e0f251297a7c31b05e7d2b"} Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.818605 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cfpc5" event={"ID":"f5486cef-a49e-43c1-b4b2-798ee149238f","Type":"ContainerStarted","Data":"3d3679e849dd043d9fd60e23f996cb5e468818f11908a3ca4f8d327764a904ed"} Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.819429 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cpl5k" 
event={"ID":"e9138b46-80df-4e49-a519-807c3037d727","Type":"ContainerStarted","Data":"9b2087934b0b39dec89580a4b45111ee9c8c700d3b081432c59fd0e4d17538c7"} Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.820556 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jkq9j" event={"ID":"3c962745-1298-4fbc-a4c7-ae2b75c1ce49","Type":"ContainerStarted","Data":"f831b38618b0c1c08b466d87955fe75b96aa9beefed6ce6e63dae035b819c88b"} Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.821478 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-wwvll" event={"ID":"0502c0dc-a197-41c4-a69a-ee8b633f4cb6","Type":"ContainerStarted","Data":"a252f62bcb17ff3521f1ad88809bcb6e9d6c6a3e08ecfcf1abedc7dd337d6503"} Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.822184 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-27k89" event={"ID":"fcd59aa5-5edf-4200-aa2d-298f8b452fff","Type":"ContainerStarted","Data":"808241326c2bd9140f5bfc5299d3b7cc4a4bf7d291fe89922dc3dc4e123585cf"} Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.822918 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-g24m9" event={"ID":"b97d6c13-451c-43b2-a9cf-a1cb50dc4f71","Type":"ContainerStarted","Data":"86dea920cea70d7972c89f0f47efaf49ea849520da200eb5b1bddf56f46b2007"} Nov 29 06:51:39 crc kubenswrapper[4947]: I1129 06:51:39.825041 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-rrmmb" event={"ID":"3c7abd7d-ad0b-4a8e-9de6-95a7da2c11de","Type":"ContainerStarted","Data":"a12f96ab0b490a26db8de73b839f30804b3c0ba1d29c98f4e69477eec8578276"} Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.044579 4947 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-mmx2k"] Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.052557 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-lhstd"] Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.058765 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-r7gd9"] Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.090424 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-9bjj9"] Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.114837 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-r4qxh"] Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.151624 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-256pd"] Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.164396 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-86lcn"] Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.174376 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-d5zp8"] Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.185945 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lc774"] Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.203589 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8hwgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-256pd_openstack-operators(3b8c773c-790f-4897-bf54-8ea2a8780a9a): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.203702 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-blk9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-lc774_openstack-operators(bae6a0cd-2803-4ddb-9de7-11cb8b39f0fb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.203829 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pkdcp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-d5zp8_openstack-operators(3741905b-b90e-4f39-b6f9-8e197ebd3b42): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.205179 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vnqkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-86lcn_openstack-operators(49e564a6-0297-489a-8239-d195776466e7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.205598 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-256pd" podUID="3b8c773c-790f-4897-bf54-8ea2a8780a9a" Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.206518 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-blk9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-lc774_openstack-operators(bae6a0cd-2803-4ddb-9de7-11cb8b39f0fb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.206558 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-hxtg8"] Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.206768 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pkdcp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-d5zp8_openstack-operators(3741905b-b90e-4f39-b6f9-8e197ebd3b42): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.207802 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vnqkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-86lcn_openstack-operators(49e564a6-0297-489a-8239-d195776466e7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.207869 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lc774" podUID="bae6a0cd-2803-4ddb-9de7-11cb8b39f0fb" Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.207882 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-d5zp8" podUID="3741905b-b90e-4f39-b6f9-8e197ebd3b42" Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.210296 4947 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-86lcn" podUID="49e564a6-0297-489a-8239-d195776466e7"
Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.210946 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qx47n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-hxtg8_openstack-operators(3c90fa55-0db3-435f-9927-983db02d2fac): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.214695 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qx47n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-hxtg8_openstack-operators(3c90fa55-0db3-435f-9927-983db02d2fac): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.215925 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-hxtg8" podUID="3c90fa55-0db3-435f-9927-983db02d2fac"
Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.401931 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4af0248a-8357-4dce-95fc-1ed6384dc3f2-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2\" (UID: \"4af0248a-8357-4dce-95fc-1ed6384dc3f2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2"
Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.402209 4947 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.402373 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4af0248a-8357-4dce-95fc-1ed6384dc3f2-cert podName:4af0248a-8357-4dce-95fc-1ed6384dc3f2 nodeName:}" failed. No retries permitted until 2025-11-29 06:51:42.402338881 +0000 UTC m=+1053.446721122 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4af0248a-8357-4dce-95fc-1ed6384dc3f2-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2" (UID: "4af0248a-8357-4dce-95fc-1ed6384dc3f2") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.708499 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-metrics-certs\") pod \"openstack-operator-controller-manager-6c7b7f98c7-qvfhd\" (UID: \"e2802692-9854-4734-b9b0-c62eb59fb041\") " pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd"
Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.708627 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-webhook-certs\") pod \"openstack-operator-controller-manager-6c7b7f98c7-qvfhd\" (UID: \"e2802692-9854-4734-b9b0-c62eb59fb041\") " pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd"
Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.708905 4947 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.708995 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-webhook-certs podName:e2802692-9854-4734-b9b0-c62eb59fb041 nodeName:}" failed. No retries permitted until 2025-11-29 06:51:42.708950647 +0000 UTC m=+1053.753332728 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-webhook-certs") pod "openstack-operator-controller-manager-6c7b7f98c7-qvfhd" (UID: "e2802692-9854-4734-b9b0-c62eb59fb041") : secret "webhook-server-cert" not found
Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.709704 4947 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.709757 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-metrics-certs podName:e2802692-9854-4734-b9b0-c62eb59fb041 nodeName:}" failed. No retries permitted until 2025-11-29 06:51:42.709726826 +0000 UTC m=+1053.754108917 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-metrics-certs") pod "openstack-operator-controller-manager-6c7b7f98c7-qvfhd" (UID: "e2802692-9854-4734-b9b0-c62eb59fb041") : secret "metrics-server-cert" not found
Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.836110 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lhstd" event={"ID":"d5a6afb3-6d78-488e-9f48-9d4b58f998bd","Type":"ContainerStarted","Data":"91139a92caaa767c4b8b19a5a206e0041cf438af432c60d4d4ef67fa31e65f9e"}
Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.838917 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-d5zp8" event={"ID":"3741905b-b90e-4f39-b6f9-8e197ebd3b42","Type":"ContainerStarted","Data":"985a3f4be59dac0e14174e946a0698e6a6ec107c41459c917636bf708ee59851"}
Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.842440 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-r4qxh" event={"ID":"bb3b3efc-1204-4486-b13d-be927701b46a","Type":"ContainerStarted","Data":"57ddf96e6a99d75ffaedbeeea22db88673e15025eb2ca993c17dd46608404488"}
Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.843062 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-d5zp8" podUID="3741905b-b90e-4f39-b6f9-8e197ebd3b42"
Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.845294 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-mmx2k" event={"ID":"401765a6-8ea5-478e-bd29-5c2c717b57d3","Type":"ContainerStarted","Data":"21c521f6f8bef964cb5d3001749ce55e7e21bb079c4699c418cff02cbaf11f2f"}
Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.847600 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-r7gd9" event={"ID":"edb90e2e-84a1-4544-87b4-7e26c9dfd9bc","Type":"ContainerStarted","Data":"3bba5b8d901861c8f01a1a8b069c072a88d03b4bdef12bd57e6176ce3c65f5df"}
Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.849209 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-256pd" event={"ID":"3b8c773c-790f-4897-bf54-8ea2a8780a9a","Type":"ContainerStarted","Data":"cad181fb9091b77067fd81e12709073769c61a5131cf38b1c282fe6560ac939c"}
Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.850933 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lc774" event={"ID":"bae6a0cd-2803-4ddb-9de7-11cb8b39f0fb","Type":"ContainerStarted","Data":"543c9d62499abf71028df2b9e7b07f40c458d2f9dbf5a47f40b354053b1d2357"}
Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.851725 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-256pd" podUID="3b8c773c-790f-4897-bf54-8ea2a8780a9a"
Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.854836 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-86lcn" event={"ID":"49e564a6-0297-489a-8239-d195776466e7","Type":"ContainerStarted","Data":"c928eab3844ff6c7f6db85c45447eb242ababd849e6a6ed5759e020c88f66ffa"}
Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.872391 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-86lcn" podUID="49e564a6-0297-489a-8239-d195776466e7"
Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.872583 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lc774" podUID="bae6a0cd-2803-4ddb-9de7-11cb8b39f0fb"
Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.873229 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9bjj9" event={"ID":"7834eae4-c153-4d24-be4b-cfeb03744cff","Type":"ContainerStarted","Data":"26fcd4872401489f070e8a9765c5911cf995e3f274c5917feacdc6b94189aa38"}
Nov 29 06:51:40 crc kubenswrapper[4947]: I1129 06:51:40.875254 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-hxtg8" event={"ID":"3c90fa55-0db3-435f-9927-983db02d2fac","Type":"ContainerStarted","Data":"8750ce6221b8c5605b86cf1d4a9ac6f7f66ddb411f8250e2a42b4bc5f3249d99"}
Nov 29 06:51:40 crc kubenswrapper[4947]: E1129 06:51:40.883144 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-hxtg8" podUID="3c90fa55-0db3-435f-9927-983db02d2fac"
Nov 29 06:51:41 crc kubenswrapper[4947]: I1129 06:51:41.837393 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1873231-ef75-414f-85a0-9536e7e45d24-cert\") pod \"infra-operator-controller-manager-57548d458d-4ffsj\" (UID: \"b1873231-ef75-414f-85a0-9536e7e45d24\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj"
Nov 29 06:51:41 crc kubenswrapper[4947]: E1129 06:51:41.837974 4947 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 29 06:51:41 crc kubenswrapper[4947]: E1129 06:51:41.838034 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1873231-ef75-414f-85a0-9536e7e45d24-cert podName:b1873231-ef75-414f-85a0-9536e7e45d24 nodeName:}" failed. No retries permitted until 2025-11-29 06:51:45.838016521 +0000 UTC m=+1056.882398602 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1873231-ef75-414f-85a0-9536e7e45d24-cert") pod "infra-operator-controller-manager-57548d458d-4ffsj" (UID: "b1873231-ef75-414f-85a0-9536e7e45d24") : secret "infra-operator-webhook-server-cert" not found
Nov 29 06:51:41 crc kubenswrapper[4947]: E1129 06:51:41.910572 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-256pd" podUID="3b8c773c-790f-4897-bf54-8ea2a8780a9a"
Nov 29 06:51:41 crc kubenswrapper[4947]: E1129 06:51:41.912403 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-86lcn" podUID="49e564a6-0297-489a-8239-d195776466e7"
Nov 29 06:51:41 crc kubenswrapper[4947]: E1129 06:51:41.912956 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lc774" podUID="bae6a0cd-2803-4ddb-9de7-11cb8b39f0fb"
Nov 29 06:51:41 crc kubenswrapper[4947]: E1129 06:51:41.914717 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-d5zp8" podUID="3741905b-b90e-4f39-b6f9-8e197ebd3b42"
Nov 29 06:51:41 crc kubenswrapper[4947]: E1129 06:51:41.915010 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-hxtg8" podUID="3c90fa55-0db3-435f-9927-983db02d2fac"
Nov 29 06:51:42 crc kubenswrapper[4947]: I1129 06:51:42.447823 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4af0248a-8357-4dce-95fc-1ed6384dc3f2-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2\" (UID: \"4af0248a-8357-4dce-95fc-1ed6384dc3f2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2"
Nov 29 06:51:42 crc kubenswrapper[4947]: E1129 06:51:42.448028 4947 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 29 06:51:42 crc kubenswrapper[4947]: E1129 06:51:42.448089 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4af0248a-8357-4dce-95fc-1ed6384dc3f2-cert podName:4af0248a-8357-4dce-95fc-1ed6384dc3f2 nodeName:}" failed. No retries permitted until 2025-11-29 06:51:46.448069042 +0000 UTC m=+1057.492451123 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4af0248a-8357-4dce-95fc-1ed6384dc3f2-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2" (UID: "4af0248a-8357-4dce-95fc-1ed6384dc3f2") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 29 06:51:42 crc kubenswrapper[4947]: I1129 06:51:42.753259 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-metrics-certs\") pod \"openstack-operator-controller-manager-6c7b7f98c7-qvfhd\" (UID: \"e2802692-9854-4734-b9b0-c62eb59fb041\") " pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd"
Nov 29 06:51:42 crc kubenswrapper[4947]: I1129 06:51:42.753425 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-webhook-certs\") pod \"openstack-operator-controller-manager-6c7b7f98c7-qvfhd\" (UID: \"e2802692-9854-4734-b9b0-c62eb59fb041\") " pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd"
Nov 29 06:51:42 crc kubenswrapper[4947]: E1129 06:51:42.753512 4947 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Nov 29 06:51:42 crc kubenswrapper[4947]: E1129 06:51:42.753644 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-metrics-certs podName:e2802692-9854-4734-b9b0-c62eb59fb041 nodeName:}" failed. No retries permitted until 2025-11-29 06:51:46.753613311 +0000 UTC m=+1057.797995572 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-metrics-certs") pod "openstack-operator-controller-manager-6c7b7f98c7-qvfhd" (UID: "e2802692-9854-4734-b9b0-c62eb59fb041") : secret "metrics-server-cert" not found
Nov 29 06:51:42 crc kubenswrapper[4947]: E1129 06:51:42.753699 4947 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Nov 29 06:51:42 crc kubenswrapper[4947]: E1129 06:51:42.753796 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-webhook-certs podName:e2802692-9854-4734-b9b0-c62eb59fb041 nodeName:}" failed. No retries permitted until 2025-11-29 06:51:46.753771485 +0000 UTC m=+1057.798153776 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-webhook-certs") pod "openstack-operator-controller-manager-6c7b7f98c7-qvfhd" (UID: "e2802692-9854-4734-b9b0-c62eb59fb041") : secret "webhook-server-cert" not found
Nov 29 06:51:45 crc kubenswrapper[4947]: I1129 06:51:45.927612 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1873231-ef75-414f-85a0-9536e7e45d24-cert\") pod \"infra-operator-controller-manager-57548d458d-4ffsj\" (UID: \"b1873231-ef75-414f-85a0-9536e7e45d24\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj"
Nov 29 06:51:45 crc kubenswrapper[4947]: E1129 06:51:45.927855 4947 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 29 06:51:45 crc kubenswrapper[4947]: E1129 06:51:45.928554 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1873231-ef75-414f-85a0-9536e7e45d24-cert podName:b1873231-ef75-414f-85a0-9536e7e45d24 nodeName:}" failed. No retries permitted until 2025-11-29 06:51:53.928534239 +0000 UTC m=+1064.972916320 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1873231-ef75-414f-85a0-9536e7e45d24-cert") pod "infra-operator-controller-manager-57548d458d-4ffsj" (UID: "b1873231-ef75-414f-85a0-9536e7e45d24") : secret "infra-operator-webhook-server-cert" not found
Nov 29 06:51:46 crc kubenswrapper[4947]: I1129 06:51:46.540122 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4af0248a-8357-4dce-95fc-1ed6384dc3f2-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2\" (UID: \"4af0248a-8357-4dce-95fc-1ed6384dc3f2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2"
Nov 29 06:51:46 crc kubenswrapper[4947]: E1129 06:51:46.540399 4947 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 29 06:51:46 crc kubenswrapper[4947]: E1129 06:51:46.540841 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4af0248a-8357-4dce-95fc-1ed6384dc3f2-cert podName:4af0248a-8357-4dce-95fc-1ed6384dc3f2 nodeName:}" failed. No retries permitted until 2025-11-29 06:51:54.540808166 +0000 UTC m=+1065.585190247 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4af0248a-8357-4dce-95fc-1ed6384dc3f2-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2" (UID: "4af0248a-8357-4dce-95fc-1ed6384dc3f2") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 29 06:51:46 crc kubenswrapper[4947]: I1129 06:51:46.846780 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-metrics-certs\") pod \"openstack-operator-controller-manager-6c7b7f98c7-qvfhd\" (UID: \"e2802692-9854-4734-b9b0-c62eb59fb041\") " pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd"
Nov 29 06:51:46 crc kubenswrapper[4947]: I1129 06:51:46.846868 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-webhook-certs\") pod \"openstack-operator-controller-manager-6c7b7f98c7-qvfhd\" (UID: \"e2802692-9854-4734-b9b0-c62eb59fb041\") " pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd"
Nov 29 06:51:46 crc kubenswrapper[4947]: E1129 06:51:46.847057 4947 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Nov 29 06:51:46 crc kubenswrapper[4947]: E1129 06:51:46.847139 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-webhook-certs podName:e2802692-9854-4734-b9b0-c62eb59fb041 nodeName:}" failed. No retries permitted until 2025-11-29 06:51:54.847115174 +0000 UTC m=+1065.891497255 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-webhook-certs") pod "openstack-operator-controller-manager-6c7b7f98c7-qvfhd" (UID: "e2802692-9854-4734-b9b0-c62eb59fb041") : secret "webhook-server-cert" not found
Nov 29 06:51:46 crc kubenswrapper[4947]: E1129 06:51:46.847139 4947 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Nov 29 06:51:46 crc kubenswrapper[4947]: E1129 06:51:46.847267 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-metrics-certs podName:e2802692-9854-4734-b9b0-c62eb59fb041 nodeName:}" failed. No retries permitted until 2025-11-29 06:51:54.847240547 +0000 UTC m=+1065.891622628 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-metrics-certs") pod "openstack-operator-controller-manager-6c7b7f98c7-qvfhd" (UID: "e2802692-9854-4734-b9b0-c62eb59fb041") : secret "metrics-server-cert" not found
Nov 29 06:51:52 crc kubenswrapper[4947]: E1129 06:51:52.054030 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea"
Nov 29 06:51:52 crc kubenswrapper[4947]: E1129 06:51:52.054572 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jjxzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-cpl5k_openstack-operators(e9138b46-80df-4e49-a519-807c3037d727): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 29 06:51:52 crc kubenswrapper[4947]: I1129 06:51:52.136628 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 29 06:51:52 crc kubenswrapper[4947]: I1129 06:51:52.987720 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 29 06:51:52 crc kubenswrapper[4947]: I1129 06:51:52.988157 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 29 06:51:52 crc kubenswrapper[4947]: I1129 06:51:52.988211 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc"
Nov 29 06:51:52 crc kubenswrapper[4947]: I1129 06:51:52.989045 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"95afd1d0c4fb1119bc14de336e7d92cb2ee91cd1747056ef7ee978c29db619c9"} pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 29 06:51:52 crc kubenswrapper[4947]: I1129 06:51:52.989117 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" containerID="cri-o://95afd1d0c4fb1119bc14de336e7d92cb2ee91cd1747056ef7ee978c29db619c9" gracePeriod=600
Nov 29 06:51:53 crc kubenswrapper[4947]: I1129 06:51:53.986855 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1873231-ef75-414f-85a0-9536e7e45d24-cert\") pod \"infra-operator-controller-manager-57548d458d-4ffsj\" (UID: \"b1873231-ef75-414f-85a0-9536e7e45d24\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj"
Nov 29 06:51:53 crc kubenswrapper[4947]: E1129 06:51:53.987057 4947 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 29 06:51:53 crc kubenswrapper[4947]: E1129 06:51:53.987157 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1873231-ef75-414f-85a0-9536e7e45d24-cert podName:b1873231-ef75-414f-85a0-9536e7e45d24 nodeName:}" failed. No retries permitted until 2025-11-29 06:52:09.987133789 +0000 UTC m=+1081.031515880 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1873231-ef75-414f-85a0-9536e7e45d24-cert") pod "infra-operator-controller-manager-57548d458d-4ffsj" (UID: "b1873231-ef75-414f-85a0-9536e7e45d24") : secret "infra-operator-webhook-server-cert" not found
Nov 29 06:51:54 crc kubenswrapper[4947]: I1129 06:51:54.008267 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerID="95afd1d0c4fb1119bc14de336e7d92cb2ee91cd1747056ef7ee978c29db619c9" exitCode=0
Nov 29 06:51:54 crc kubenswrapper[4947]: I1129 06:51:54.008327 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerDied","Data":"95afd1d0c4fb1119bc14de336e7d92cb2ee91cd1747056ef7ee978c29db619c9"}
Nov 29 06:51:54 crc kubenswrapper[4947]: I1129 06:51:54.008375 4947 scope.go:117] "RemoveContainer" containerID="e3f38270dbfc41785276b23821b9697dddfbb4108ac42aabfc9b7652679ef1e3"
Nov 29 06:51:54 crc kubenswrapper[4947]: I1129 06:51:54.595795 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4af0248a-8357-4dce-95fc-1ed6384dc3f2-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2\" (UID: \"4af0248a-8357-4dce-95fc-1ed6384dc3f2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2"
Nov 29 06:51:54 crc kubenswrapper[4947]: I1129 06:51:54.612111 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4af0248a-8357-4dce-95fc-1ed6384dc3f2-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2\" (UID: \"4af0248a-8357-4dce-95fc-1ed6384dc3f2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2"
Nov 29 06:51:54 crc kubenswrapper[4947]: I1129 06:51:54.840727 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2"
Nov 29 06:51:54 crc kubenswrapper[4947]: I1129 06:51:54.900698 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-webhook-certs\") pod \"openstack-operator-controller-manager-6c7b7f98c7-qvfhd\" (UID: \"e2802692-9854-4734-b9b0-c62eb59fb041\") " pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd"
Nov 29 06:51:54 crc kubenswrapper[4947]: I1129 06:51:54.900859 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-metrics-certs\") pod \"openstack-operator-controller-manager-6c7b7f98c7-qvfhd\" (UID: \"e2802692-9854-4734-b9b0-c62eb59fb041\") " pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd"
Nov 29 06:51:54 crc kubenswrapper[4947]: I1129 06:51:54.904960 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-metrics-certs\") pod \"openstack-operator-controller-manager-6c7b7f98c7-qvfhd\" (UID: \"e2802692-9854-4734-b9b0-c62eb59fb041\") " pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd"
Nov 29 06:51:54 crc kubenswrapper[4947]: I1129 06:51:54.905486 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e2802692-9854-4734-b9b0-c62eb59fb041-webhook-certs\") pod \"openstack-operator-controller-manager-6c7b7f98c7-qvfhd\" (UID: \"e2802692-9854-4734-b9b0-c62eb59fb041\") " pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd"
Nov 29 06:51:55 crc kubenswrapper[4947]: I1129 06:51:55.057377 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd"
Nov 29 06:51:59 crc kubenswrapper[4947]: E1129 06:51:59.083789 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5"
Nov 29 06:51:59 crc kubenswrapper[4947]: E1129 06:51:59.084449 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zgv8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-wwvll_openstack-operators(0502c0dc-a197-41c4-a69a-ee8b633f4cb6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:52:00 crc kubenswrapper[4947]: E1129 06:52:00.117603 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Nov 29 06:52:00 crc kubenswrapper[4947]: E1129 06:52:00.117939 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x8s4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-lhstd_openstack-operators(d5a6afb3-6d78-488e-9f48-9d4b58f998bd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:52:02 crc kubenswrapper[4947]: E1129 06:52:02.611815 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7" Nov 29 06:52:02 crc kubenswrapper[4947]: E1129 06:52:02.612503 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fvhdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-mmx2k_openstack-operators(401765a6-8ea5-478e-bd29-5c2c717b57d3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:52:03 crc kubenswrapper[4947]: E1129 06:52:03.414397 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Nov 29 06:52:03 crc kubenswrapper[4947]: E1129 06:52:03.414724 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4wrrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-jkq9j_openstack-operators(3c962745-1298-4fbc-a4c7-ae2b75c1ce49): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:52:04 crc kubenswrapper[4947]: E1129 06:52:04.082947 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530" Nov 29 06:52:04 crc kubenswrapper[4947]: E1129 06:52:04.083481 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sclxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6c548fd776-p6ml6_openstack-operators(8f81b728-2bf1-4638-91f7-717ab75349f3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:52:04 crc kubenswrapper[4947]: E1129 06:52:04.908063 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:ecf7be921850bdc04697ed1b332bab39ad2a64e4e45c2a445c04f9bae6ac61b5" Nov 29 06:52:04 crc kubenswrapper[4947]: E1129 06:52:04.908392 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:ecf7be921850bdc04697ed1b332bab39ad2a64e4e45c2a445c04f9bae6ac61b5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q9qdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-6546668bfd-27k89_openstack-operators(fcd59aa5-5edf-4200-aa2d-298f8b452fff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:52:05 crc kubenswrapper[4947]: E1129 06:52:05.547863 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Nov 29 06:52:05 crc kubenswrapper[4947]: E1129 06:52:05.548687 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-29pg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-9bjj9_openstack-operators(7834eae4-c153-4d24-be4b-cfeb03744cff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:52:06 crc kubenswrapper[4947]: E1129 06:52:06.368066 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:986861e5a0a9954f63581d9d55a30f8057883cefea489415d76257774526eea3" Nov 29 06:52:06 crc kubenswrapper[4947]: E1129 06:52:06.368364 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:986861e5a0a9954f63581d9d55a30f8057883cefea489415d76257774526eea3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bkwjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-546d4bdf48-cfpc5_openstack-operators(f5486cef-a49e-43c1-b4b2-798ee149238f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:52:10 crc kubenswrapper[4947]: I1129 06:52:10.029092 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1873231-ef75-414f-85a0-9536e7e45d24-cert\") pod \"infra-operator-controller-manager-57548d458d-4ffsj\" (UID: \"b1873231-ef75-414f-85a0-9536e7e45d24\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj" Nov 29 06:52:10 crc kubenswrapper[4947]: I1129 06:52:10.041297 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1873231-ef75-414f-85a0-9536e7e45d24-cert\") pod \"infra-operator-controller-manager-57548d458d-4ffsj\" (UID: \"b1873231-ef75-414f-85a0-9536e7e45d24\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj" Nov 29 06:52:10 crc kubenswrapper[4947]: I1129 06:52:10.315325 4947 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-6hzdl" Nov 29 06:52:10 crc kubenswrapper[4947]: I1129 06:52:10.323647 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj" Nov 29 06:52:11 crc kubenswrapper[4947]: E1129 06:52:11.022615 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Nov 29 06:52:11 crc kubenswrapper[4947]: E1129 06:52:11.023480 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n4gjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-r7gd9_openstack-operators(edb90e2e-84a1-4544-87b4-7e26c9dfd9bc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:52:14 crc kubenswrapper[4947]: E1129 06:52:14.347708 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Nov 29 06:52:14 crc kubenswrapper[4947]: E1129 06:52:14.348627 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8hwgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-256pd_openstack-operators(3b8c773c-790f-4897-bf54-8ea2a8780a9a): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:52:14 crc kubenswrapper[4947]: E1129 06:52:14.349980 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-256pd" podUID="3b8c773c-790f-4897-bf54-8ea2a8780a9a" Nov 29 06:52:15 crc kubenswrapper[4947]: I1129 06:52:15.161630 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2"] Nov 29 06:52:15 crc kubenswrapper[4947]: I1129 06:52:15.242433 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd"] Nov 29 06:52:16 crc kubenswrapper[4947]: W1129 06:52:16.050833 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4af0248a_8357_4dce_95fc_1ed6384dc3f2.slice/crio-fbd839ff7ff5546b656ad93e9223c9b2443fad9dc797fdc9cf295756320a8de7 WatchSource:0}: Error finding container fbd839ff7ff5546b656ad93e9223c9b2443fad9dc797fdc9cf295756320a8de7: Status 404 returned error can't find the container with id fbd839ff7ff5546b656ad93e9223c9b2443fad9dc797fdc9cf295756320a8de7 Nov 29 06:52:16 crc kubenswrapper[4947]: W1129 06:52:16.052003 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2802692_9854_4734_b9b0_c62eb59fb041.slice/crio-ca8bc8936f656a53ef459beb0c47abca13e6cacd33b8d209e8d59fd19d30db73 WatchSource:0}: Error finding container ca8bc8936f656a53ef459beb0c47abca13e6cacd33b8d209e8d59fd19d30db73: Status 404 returned error can't find the container with id ca8bc8936f656a53ef459beb0c47abca13e6cacd33b8d209e8d59fd19d30db73 Nov 29 06:52:16 crc 
kubenswrapper[4947]: I1129 06:52:16.192940 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd" event={"ID":"e2802692-9854-4734-b9b0-c62eb59fb041","Type":"ContainerStarted","Data":"ca8bc8936f656a53ef459beb0c47abca13e6cacd33b8d209e8d59fd19d30db73"} Nov 29 06:52:16 crc kubenswrapper[4947]: I1129 06:52:16.195766 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2" event={"ID":"4af0248a-8357-4dce-95fc-1ed6384dc3f2","Type":"ContainerStarted","Data":"fbd839ff7ff5546b656ad93e9223c9b2443fad9dc797fdc9cf295756320a8de7"} Nov 29 06:52:16 crc kubenswrapper[4947]: I1129 06:52:16.259795 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj"] Nov 29 06:52:17 crc kubenswrapper[4947]: I1129 06:52:17.214182 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-g24m9" event={"ID":"b97d6c13-451c-43b2-a9cf-a1cb50dc4f71","Type":"ContainerStarted","Data":"30ec33bfd176241571fe805c7571a9aaa5b17b248f00d9f5c60e4a8bdb23dad2"} Nov 29 06:52:17 crc kubenswrapper[4947]: I1129 06:52:17.215866 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-rrmmb" event={"ID":"3c7abd7d-ad0b-4a8e-9de6-95a7da2c11de","Type":"ContainerStarted","Data":"2050e191bb07529734ae7166dbae95271ae902e518ad007f698aa6d3a561046c"} Nov 29 06:52:17 crc kubenswrapper[4947]: I1129 06:52:17.219542 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"a415fb27869ca193be5294677b2f866f2ec48db054e83e7f53b656f014c7087f"} Nov 29 06:52:18 crc kubenswrapper[4947]: W1129 06:52:18.278180 4947 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1873231_ef75_414f_85a0_9536e7e45d24.slice/crio-e4b87bb8458f383c82044b54aaee3a5f7e3434356aba576bfcc6d15421b3b1d0 WatchSource:0}: Error finding container e4b87bb8458f383c82044b54aaee3a5f7e3434356aba576bfcc6d15421b3b1d0: Status 404 returned error can't find the container with id e4b87bb8458f383c82044b54aaee3a5f7e3434356aba576bfcc6d15421b3b1d0 Nov 29 06:52:19 crc kubenswrapper[4947]: I1129 06:52:19.246263 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5898f4cf77-vpq4v" event={"ID":"a3977bd5-f0c4-4d95-bc6c-905bb2f03a07","Type":"ContainerStarted","Data":"7d6762c2b0503b3ae06e98b32b9c7562e401ea1fbbdfe148387cab96a1e98a1c"} Nov 29 06:52:19 crc kubenswrapper[4947]: I1129 06:52:19.248094 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-r4qxh" event={"ID":"bb3b3efc-1204-4486-b13d-be927701b46a","Type":"ContainerStarted","Data":"d6253518899761d1f00f1499c62c0d870ece4ddb755adfb295f6840c40b44b6c"} Nov 29 06:52:19 crc kubenswrapper[4947]: I1129 06:52:19.249268 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj" event={"ID":"b1873231-ef75-414f-85a0-9536e7e45d24","Type":"ContainerStarted","Data":"e4b87bb8458f383c82044b54aaee3a5f7e3434356aba576bfcc6d15421b3b1d0"} Nov 29 06:52:19 crc kubenswrapper[4947]: I1129 06:52:19.250956 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-c45vq" event={"ID":"278ee247-5381-4806-b4b7-9247f9ff162d","Type":"ContainerStarted","Data":"c333f8ffdde665eab790fa40c0c8280cfbf924967ecc0a5be0d9596534aefc6f"} Nov 29 06:52:21 crc kubenswrapper[4947]: I1129 06:52:21.271713 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lc774" event={"ID":"bae6a0cd-2803-4ddb-9de7-11cb8b39f0fb","Type":"ContainerStarted","Data":"e0eeae4392196f7767f2f10aee2f1d005dc719e4078956aa0d1db89891276c50"} Nov 29 06:52:22 crc kubenswrapper[4947]: I1129 06:52:22.289453 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd" event={"ID":"e2802692-9854-4734-b9b0-c62eb59fb041","Type":"ContainerStarted","Data":"6aabb1edc8766a04c2b4b9058d40904151ce5906b67503b6d8401a463cbf7cb4"} Nov 29 06:52:22 crc kubenswrapper[4947]: I1129 06:52:22.289995 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd" Nov 29 06:52:22 crc kubenswrapper[4947]: I1129 06:52:22.300598 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-86lcn" event={"ID":"49e564a6-0297-489a-8239-d195776466e7","Type":"ContainerStarted","Data":"8edf1ff318a6a523db278edb823ec47d0bcb4544d249f7008dc0ac939c7eb336"} Nov 29 06:52:22 crc kubenswrapper[4947]: I1129 06:52:22.308593 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-hxtg8" event={"ID":"3c90fa55-0db3-435f-9927-983db02d2fac","Type":"ContainerStarted","Data":"d02c76dea8e0d2f6ab4ee5f7e577fe21477b03f64e1a25e539f0bbb72b4c7ac1"} Nov 29 06:52:22 crc kubenswrapper[4947]: I1129 06:52:22.311581 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-d5zp8" event={"ID":"3741905b-b90e-4f39-b6f9-8e197ebd3b42","Type":"ContainerStarted","Data":"54825c06907223f6086aa587d33aa4f007fa043749bb1623d1954cdb181eae14"} Nov 29 06:52:22 crc kubenswrapper[4947]: E1129 06:52:22.693534 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = 
copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 29 06:52:22 crc kubenswrapper[4947]: E1129 06:52:22.693843 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jjxzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-cpl5k_openstack-operators(e9138b46-80df-4e49-a519-807c3037d727): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:52:22 crc kubenswrapper[4947]: E1129 06:52:22.695099 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed 
to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cpl5k" podUID="e9138b46-80df-4e49-a519-807c3037d727" Nov 29 06:52:23 crc kubenswrapper[4947]: I1129 06:52:23.322514 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-g24m9" event={"ID":"b97d6c13-451c-43b2-a9cf-a1cb50dc4f71","Type":"ContainerStarted","Data":"4fb951f6ba7ac0deeaa253f5ed90e81e60c2e9a64e37ab6c03cd9cc270f175cc"} Nov 29 06:52:23 crc kubenswrapper[4947]: I1129 06:52:23.323242 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-g24m9" Nov 29 06:52:23 crc kubenswrapper[4947]: I1129 06:52:23.331065 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-g24m9" Nov 29 06:52:23 crc kubenswrapper[4947]: I1129 06:52:23.346122 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd" podStartSLOduration=45.346100343 podStartE2EDuration="45.346100343s" podCreationTimestamp="2025-11-29 06:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:52:22.347767406 +0000 UTC m=+1093.392149487" watchObservedRunningTime="2025-11-29 06:52:23.346100343 +0000 UTC m=+1094.390482424" Nov 29 06:52:23 crc kubenswrapper[4947]: I1129 06:52:23.380303 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-g24m9" 
podStartSLOduration=2.589028574 podStartE2EDuration="46.380240939s" podCreationTimestamp="2025-11-29 06:51:37 +0000 UTC" firstStartedPulling="2025-11-29 06:51:39.223066964 +0000 UTC m=+1050.267449045" lastFinishedPulling="2025-11-29 06:52:23.014279329 +0000 UTC m=+1094.058661410" observedRunningTime="2025-11-29 06:52:23.347338245 +0000 UTC m=+1094.391720326" watchObservedRunningTime="2025-11-29 06:52:23.380240939 +0000 UTC m=+1094.424623020" Nov 29 06:52:23 crc kubenswrapper[4947]: E1129 06:52:23.665952 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jkq9j" podUID="3c962745-1298-4fbc-a4c7-ae2b75c1ce49" Nov 29 06:52:23 crc kubenswrapper[4947]: E1129 06:52:23.677459 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9bjj9" podUID="7834eae4-c153-4d24-be4b-cfeb03744cff" Nov 29 06:52:23 crc kubenswrapper[4947]: E1129 06:52:23.982584 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-wwvll" podUID="0502c0dc-a197-41c4-a69a-ee8b633f4cb6" Nov 29 06:52:24 crc kubenswrapper[4947]: I1129 06:52:24.363158 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-d5zp8" event={"ID":"3741905b-b90e-4f39-b6f9-8e197ebd3b42","Type":"ContainerStarted","Data":"743935f6c2fdf7a4a4803ca730149764fed62a417000771bd3b214b05cd5e454"} Nov 29 06:52:24 crc kubenswrapper[4947]: I1129 
06:52:24.364121 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-d5zp8" Nov 29 06:52:24 crc kubenswrapper[4947]: I1129 06:52:24.382456 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lc774" event={"ID":"bae6a0cd-2803-4ddb-9de7-11cb8b39f0fb","Type":"ContainerStarted","Data":"4f7b4aa8bf62f5b9b21841afd06e88ce7e918abfdd7c63d91d20852f6a4b62f7"} Nov 29 06:52:24 crc kubenswrapper[4947]: I1129 06:52:24.383134 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lc774" Nov 29 06:52:24 crc kubenswrapper[4947]: I1129 06:52:24.388998 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-d5zp8" podStartSLOduration=3.373912352 podStartE2EDuration="46.388982832s" podCreationTimestamp="2025-11-29 06:51:38 +0000 UTC" firstStartedPulling="2025-11-29 06:51:40.203765066 +0000 UTC m=+1051.248147147" lastFinishedPulling="2025-11-29 06:52:23.218835546 +0000 UTC m=+1094.263217627" observedRunningTime="2025-11-29 06:52:24.384875277 +0000 UTC m=+1095.429257368" watchObservedRunningTime="2025-11-29 06:52:24.388982832 +0000 UTC m=+1095.433364903" Nov 29 06:52:24 crc kubenswrapper[4947]: I1129 06:52:24.405726 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-c45vq" event={"ID":"278ee247-5381-4806-b4b7-9247f9ff162d","Type":"ContainerStarted","Data":"d854f41978cb58ce0e040890b3e08a0a29d7fb0c83f25b8921b0805a4d098bc7"} Nov 29 06:52:24 crc kubenswrapper[4947]: I1129 06:52:24.415096 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-rrmmb" 
event={"ID":"3c7abd7d-ad0b-4a8e-9de6-95a7da2c11de","Type":"ContainerStarted","Data":"ae065beae9c57ff7a7c02ecb947ee5bd40d71c77a0ca3c6ce9510bdfd7eb4123"} Nov 29 06:52:24 crc kubenswrapper[4947]: I1129 06:52:24.418005 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-rrmmb" Nov 29 06:52:24 crc kubenswrapper[4947]: I1129 06:52:24.418826 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-rrmmb" Nov 29 06:52:24 crc kubenswrapper[4947]: I1129 06:52:24.421905 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9bjj9" event={"ID":"7834eae4-c153-4d24-be4b-cfeb03744cff","Type":"ContainerStarted","Data":"ee00b4e3f78369931f87b5fcf7c07412edad5b334767d07677a7d246b555ffa9"} Nov 29 06:52:24 crc kubenswrapper[4947]: I1129 06:52:24.435242 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lc774" podStartSLOduration=3.19481202 podStartE2EDuration="46.435194444s" podCreationTimestamp="2025-11-29 06:51:38 +0000 UTC" firstStartedPulling="2025-11-29 06:51:40.203517299 +0000 UTC m=+1051.247899380" lastFinishedPulling="2025-11-29 06:52:23.443899723 +0000 UTC m=+1094.488281804" observedRunningTime="2025-11-29 06:52:24.406037524 +0000 UTC m=+1095.450419605" watchObservedRunningTime="2025-11-29 06:52:24.435194444 +0000 UTC m=+1095.479576525" Nov 29 06:52:24 crc kubenswrapper[4947]: I1129 06:52:24.447684 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-hxtg8" event={"ID":"3c90fa55-0db3-435f-9927-983db02d2fac","Type":"ContainerStarted","Data":"c2a647305469dce1f4c1cfb1ff8606fe84c15819f96d6683c4798e1dd6bf3ded"} Nov 29 06:52:24 crc kubenswrapper[4947]: I1129 06:52:24.448100 4947 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-hxtg8" Nov 29 06:52:24 crc kubenswrapper[4947]: I1129 06:52:24.454310 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-rrmmb" podStartSLOduration=3.917799475 podStartE2EDuration="47.454289998s" podCreationTimestamp="2025-11-29 06:51:37 +0000 UTC" firstStartedPulling="2025-11-29 06:51:39.657449931 +0000 UTC m=+1050.701832012" lastFinishedPulling="2025-11-29 06:52:23.193940454 +0000 UTC m=+1094.238322535" observedRunningTime="2025-11-29 06:52:24.442119459 +0000 UTC m=+1095.486501540" watchObservedRunningTime="2025-11-29 06:52:24.454289998 +0000 UTC m=+1095.498672079" Nov 29 06:52:24 crc kubenswrapper[4947]: I1129 06:52:24.459007 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-r4qxh" event={"ID":"bb3b3efc-1204-4486-b13d-be927701b46a","Type":"ContainerStarted","Data":"e0256a76de81f0a0d43779ea9f47542d42c97e0e28e99dd9d7616a77c4225e8f"} Nov 29 06:52:24 crc kubenswrapper[4947]: I1129 06:52:24.475873 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-86lcn" event={"ID":"49e564a6-0297-489a-8239-d195776466e7","Type":"ContainerStarted","Data":"705788eb255ad5ddd77d3fabb07cceb2e22e29d2f47a3ea065c1521e54791b65"} Nov 29 06:52:24 crc kubenswrapper[4947]: I1129 06:52:24.478602 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-86lcn" Nov 29 06:52:24 crc kubenswrapper[4947]: I1129 06:52:24.484811 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-wwvll" 
event={"ID":"0502c0dc-a197-41c4-a69a-ee8b633f4cb6","Type":"ContainerStarted","Data":"ee4381d29a87d86953013fffd393c32823aac1020d7bab3a8d39c74a73b394d8"} Nov 29 06:52:24 crc kubenswrapper[4947]: I1129 06:52:24.500455 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jkq9j" event={"ID":"3c962745-1298-4fbc-a4c7-ae2b75c1ce49","Type":"ContainerStarted","Data":"1d6a4a2258a94a370d3d235b25874a27fdcf7222f2ebe8ddcefeddde36abec71"} Nov 29 06:52:24 crc kubenswrapper[4947]: I1129 06:52:24.549110 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-hxtg8" podStartSLOduration=3.099953413 podStartE2EDuration="46.549087612s" podCreationTimestamp="2025-11-29 06:51:38 +0000 UTC" firstStartedPulling="2025-11-29 06:51:40.210841244 +0000 UTC m=+1051.255223315" lastFinishedPulling="2025-11-29 06:52:23.659975433 +0000 UTC m=+1094.704357514" observedRunningTime="2025-11-29 06:52:24.542435233 +0000 UTC m=+1095.586817314" watchObservedRunningTime="2025-11-29 06:52:24.549087612 +0000 UTC m=+1095.593469693" Nov 29 06:52:24 crc kubenswrapper[4947]: I1129 06:52:24.589260 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-86lcn" podStartSLOduration=3.49135519 podStartE2EDuration="46.58924113s" podCreationTimestamp="2025-11-29 06:51:38 +0000 UTC" firstStartedPulling="2025-11-29 06:51:40.205000327 +0000 UTC m=+1051.249382398" lastFinishedPulling="2025-11-29 06:52:23.302886257 +0000 UTC m=+1094.347268338" observedRunningTime="2025-11-29 06:52:24.587042185 +0000 UTC m=+1095.631424296" watchObservedRunningTime="2025-11-29 06:52:24.58924113 +0000 UTC m=+1095.633623211" Nov 29 06:52:25 crc kubenswrapper[4947]: I1129 06:52:25.511129 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lc774" Nov 29 06:52:26 crc kubenswrapper[4947]: E1129 06:52:26.181480 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-256pd" podUID="3b8c773c-790f-4897-bf54-8ea2a8780a9a" Nov 29 06:52:27 crc kubenswrapper[4947]: I1129 06:52:27.947751 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tkhhz"] Nov 29 06:52:27 crc kubenswrapper[4947]: I1129 06:52:27.949799 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tkhhz" Nov 29 06:52:27 crc kubenswrapper[4947]: I1129 06:52:27.969789 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tkhhz"] Nov 29 06:52:28 crc kubenswrapper[4947]: I1129 06:52:28.113205 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c85c02b-9761-4495-b695-e251bcf72ed1-catalog-content\") pod \"redhat-marketplace-tkhhz\" (UID: \"7c85c02b-9761-4495-b695-e251bcf72ed1\") " pod="openshift-marketplace/redhat-marketplace-tkhhz" Nov 29 06:52:28 crc kubenswrapper[4947]: I1129 06:52:28.113300 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c85c02b-9761-4495-b695-e251bcf72ed1-utilities\") pod \"redhat-marketplace-tkhhz\" (UID: \"7c85c02b-9761-4495-b695-e251bcf72ed1\") " pod="openshift-marketplace/redhat-marketplace-tkhhz" Nov 29 06:52:28 crc kubenswrapper[4947]: I1129 06:52:28.113376 4947 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99rcf\" (UniqueName: \"kubernetes.io/projected/7c85c02b-9761-4495-b695-e251bcf72ed1-kube-api-access-99rcf\") pod \"redhat-marketplace-tkhhz\" (UID: \"7c85c02b-9761-4495-b695-e251bcf72ed1\") " pod="openshift-marketplace/redhat-marketplace-tkhhz" Nov 29 06:52:28 crc kubenswrapper[4947]: I1129 06:52:28.217501 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c85c02b-9761-4495-b695-e251bcf72ed1-catalog-content\") pod \"redhat-marketplace-tkhhz\" (UID: \"7c85c02b-9761-4495-b695-e251bcf72ed1\") " pod="openshift-marketplace/redhat-marketplace-tkhhz" Nov 29 06:52:28 crc kubenswrapper[4947]: I1129 06:52:28.217590 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c85c02b-9761-4495-b695-e251bcf72ed1-utilities\") pod \"redhat-marketplace-tkhhz\" (UID: \"7c85c02b-9761-4495-b695-e251bcf72ed1\") " pod="openshift-marketplace/redhat-marketplace-tkhhz" Nov 29 06:52:28 crc kubenswrapper[4947]: I1129 06:52:28.217652 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99rcf\" (UniqueName: \"kubernetes.io/projected/7c85c02b-9761-4495-b695-e251bcf72ed1-kube-api-access-99rcf\") pod \"redhat-marketplace-tkhhz\" (UID: \"7c85c02b-9761-4495-b695-e251bcf72ed1\") " pod="openshift-marketplace/redhat-marketplace-tkhhz" Nov 29 06:52:28 crc kubenswrapper[4947]: I1129 06:52:28.218908 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c85c02b-9761-4495-b695-e251bcf72ed1-catalog-content\") pod \"redhat-marketplace-tkhhz\" (UID: \"7c85c02b-9761-4495-b695-e251bcf72ed1\") " pod="openshift-marketplace/redhat-marketplace-tkhhz" Nov 29 06:52:28 crc kubenswrapper[4947]: I1129 06:52:28.219254 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c85c02b-9761-4495-b695-e251bcf72ed1-utilities\") pod \"redhat-marketplace-tkhhz\" (UID: \"7c85c02b-9761-4495-b695-e251bcf72ed1\") " pod="openshift-marketplace/redhat-marketplace-tkhhz" Nov 29 06:52:28 crc kubenswrapper[4947]: I1129 06:52:28.249451 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99rcf\" (UniqueName: \"kubernetes.io/projected/7c85c02b-9761-4495-b695-e251bcf72ed1-kube-api-access-99rcf\") pod \"redhat-marketplace-tkhhz\" (UID: \"7c85c02b-9761-4495-b695-e251bcf72ed1\") " pod="openshift-marketplace/redhat-marketplace-tkhhz" Nov 29 06:52:28 crc kubenswrapper[4947]: I1129 06:52:28.276131 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tkhhz" Nov 29 06:52:28 crc kubenswrapper[4947]: I1129 06:52:28.738248 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tkhhz"] Nov 29 06:52:29 crc kubenswrapper[4947]: I1129 06:52:29.360601 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-86lcn" Nov 29 06:52:29 crc kubenswrapper[4947]: I1129 06:52:29.409584 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-d5zp8" Nov 29 06:52:29 crc kubenswrapper[4947]: I1129 06:52:29.434553 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-hxtg8" Nov 29 06:52:29 crc kubenswrapper[4947]: I1129 06:52:29.550152 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tkhhz" 
event={"ID":"7c85c02b-9761-4495-b695-e251bcf72ed1","Type":"ContainerStarted","Data":"d9c2836731d451c9add8b965a151765dd84932987b48b7c46905e0f25e3fca8b"} Nov 29 06:52:33 crc kubenswrapper[4947]: I1129 06:52:33.586815 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-c45vq" Nov 29 06:52:33 crc kubenswrapper[4947]: I1129 06:52:33.589862 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-c45vq" Nov 29 06:52:33 crc kubenswrapper[4947]: I1129 06:52:33.608813 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-c45vq" podStartSLOduration=12.143271147 podStartE2EDuration="56.608786281s" podCreationTimestamp="2025-11-29 06:51:37 +0000 UTC" firstStartedPulling="2025-11-29 06:51:39.220954451 +0000 UTC m=+1050.265336532" lastFinishedPulling="2025-11-29 06:52:23.686469585 +0000 UTC m=+1094.730851666" observedRunningTime="2025-11-29 06:52:33.606802801 +0000 UTC m=+1104.651184892" watchObservedRunningTime="2025-11-29 06:52:33.608786281 +0000 UTC m=+1104.653168362" Nov 29 06:52:34 crc kubenswrapper[4947]: E1129 06:52:34.406080 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 29 06:52:34 crc kubenswrapper[4947]: E1129 06:52:34.406574 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bkwjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-546d4bdf48-cfpc5_openstack-operators(f5486cef-a49e-43c1-b4b2-798ee149238f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:52:34 crc kubenswrapper[4947]: E1129 06:52:34.407822 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cfpc5" podUID="f5486cef-a49e-43c1-b4b2-798ee149238f" Nov 29 06:52:34 crc 
kubenswrapper[4947]: I1129 06:52:34.594504 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-r4qxh" Nov 29 06:52:34 crc kubenswrapper[4947]: I1129 06:52:34.598210 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-r4qxh" Nov 29 06:52:34 crc kubenswrapper[4947]: I1129 06:52:34.620255 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-r4qxh" podStartSLOduration=13.137368031 podStartE2EDuration="56.620229285s" podCreationTimestamp="2025-11-29 06:51:38 +0000 UTC" firstStartedPulling="2025-11-29 06:51:40.176115144 +0000 UTC m=+1051.220497225" lastFinishedPulling="2025-11-29 06:52:23.658976398 +0000 UTC m=+1094.703358479" observedRunningTime="2025-11-29 06:52:34.613306882 +0000 UTC m=+1105.657688983" watchObservedRunningTime="2025-11-29 06:52:34.620229285 +0000 UTC m=+1105.664611366" Nov 29 06:52:35 crc kubenswrapper[4947]: I1129 06:52:35.063418 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6c7b7f98c7-qvfhd" Nov 29 06:52:35 crc kubenswrapper[4947]: I1129 06:52:35.602450 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-r7gd9" event={"ID":"edb90e2e-84a1-4544-87b4-7e26c9dfd9bc","Type":"ContainerStarted","Data":"ffb0898a384322e56deb9726e64d9e34bd05da856810430e077de81526eff0ef"} Nov 29 06:52:36 crc kubenswrapper[4947]: E1129 06:52:36.359854 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-r7gd9" 
podUID="edb90e2e-84a1-4544-87b4-7e26c9dfd9bc" Nov 29 06:52:36 crc kubenswrapper[4947]: I1129 06:52:36.611341 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5898f4cf77-vpq4v" event={"ID":"a3977bd5-f0c4-4d95-bc6c-905bb2f03a07","Type":"ContainerStarted","Data":"90a29ecae041634d8525e6a19925cb7e827a9ebfd08dfe0afe68c20e921465bd"} Nov 29 06:52:36 crc kubenswrapper[4947]: I1129 06:52:36.612901 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lhstd" event={"ID":"d5a6afb3-6d78-488e-9f48-9d4b58f998bd","Type":"ContainerStarted","Data":"7f9ad8dbc913d6a42d7f007c0652f4a48119beb63ffcc09ad9a6695facdd9fbc"} Nov 29 06:52:36 crc kubenswrapper[4947]: I1129 06:52:36.615339 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cpl5k" event={"ID":"e9138b46-80df-4e49-a519-807c3037d727","Type":"ContainerStarted","Data":"7f30dc8ec30e9011e08a11054fe4a944c9da6e40915f6fbbb78384b4a3e64b3b"} Nov 29 06:52:37 crc kubenswrapper[4947]: I1129 06:52:37.623190 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5898f4cf77-vpq4v" Nov 29 06:52:37 crc kubenswrapper[4947]: I1129 06:52:37.626921 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5898f4cf77-vpq4v" Nov 29 06:52:37 crc kubenswrapper[4947]: I1129 06:52:37.655081 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5898f4cf77-vpq4v" podStartSLOduration=15.622101017 podStartE2EDuration="1m0.655056242s" podCreationTimestamp="2025-11-29 06:51:37 +0000 UTC" firstStartedPulling="2025-11-29 06:51:39.016303591 +0000 UTC m=+1050.060685682" lastFinishedPulling="2025-11-29 06:52:24.049258836 +0000 UTC 
m=+1095.093640907" observedRunningTime="2025-11-29 06:52:37.650724933 +0000 UTC m=+1108.695107014" watchObservedRunningTime="2025-11-29 06:52:37.655056242 +0000 UTC m=+1108.699438323" Nov 29 06:52:37 crc kubenswrapper[4947]: E1129 06:52:37.970113 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lhstd" podUID="d5a6afb3-6d78-488e-9f48-9d4b58f998bd" Nov 29 06:52:40 crc kubenswrapper[4947]: I1129 06:52:40.646961 4947 generic.go:334] "Generic (PLEG): container finished" podID="7c85c02b-9761-4495-b695-e251bcf72ed1" containerID="b62348e8b01b8e08a4a6ca4ba79d2b0f048ea97b902de91fe273cb86e00594ba" exitCode=0 Nov 29 06:52:40 crc kubenswrapper[4947]: I1129 06:52:40.647037 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tkhhz" event={"ID":"7c85c02b-9761-4495-b695-e251bcf72ed1","Type":"ContainerDied","Data":"b62348e8b01b8e08a4a6ca4ba79d2b0f048ea97b902de91fe273cb86e00594ba"} Nov 29 06:52:42 crc kubenswrapper[4947]: E1129 06:52:42.244586 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7" Nov 29 06:52:42 crc kubenswrapper[4947]: E1129 06:52:42.244968 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcqw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-57548d458d-4ffsj_openstack-operators(b1873231-ef75-414f-85a0-9536e7e45d24): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:52:42 crc kubenswrapper[4947]: E1129 06:52:42.501867 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 29 06:52:42 crc kubenswrapper[4947]: E1129 06:52:42.502717 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fvhdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-mmx2k_openstack-operators(401765a6-8ea5-478e-bd29-5c2c717b57d3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:52:42 crc kubenswrapper[4947]: E1129 06:52:42.504014 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-mmx2k" podUID="401765a6-8ea5-478e-bd29-5c2c717b57d3" Nov 29 06:52:44 crc kubenswrapper[4947]: E1129 06:52:44.321802 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81" Nov 29 06:52:44 crc 
kubenswrapper[4947]: E1129 06:52:44.322702 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Nam
e:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor
:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neu
tron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:c
urrent-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,Value
From:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-
gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openst
ack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tr42v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2_openstack-operators(4af0248a-8357-4dce-95fc-1ed6384dc3f2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:52:44 crc kubenswrapper[4947]: E1129 06:52:44.872504 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj" podUID="b1873231-ef75-414f-85a0-9536e7e45d24" Nov 29 06:52:44 crc kubenswrapper[4947]: E1129 06:52:44.873430 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-27k89" podUID="fcd59aa5-5edf-4200-aa2d-298f8b452fff" Nov 29 06:52:44 crc kubenswrapper[4947]: E1129 06:52:44.873718 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: 
\"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2" podUID="4af0248a-8357-4dce-95fc-1ed6384dc3f2" Nov 29 06:52:45 crc kubenswrapper[4947]: E1129 06:52:45.042500 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 29 06:52:45 crc kubenswrapper[4947]: E1129 06:52:45.042683 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sclxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod ironic-operator-controller-manager-6c548fd776-p6ml6_openstack-operators(8f81b728-2bf1-4638-91f7-717ab75349f3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:52:45 crc kubenswrapper[4947]: E1129 06:52:45.044044 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p6ml6" podUID="8f81b728-2bf1-4638-91f7-717ab75349f3" Nov 29 06:52:45 crc kubenswrapper[4947]: I1129 06:52:45.693536 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lhstd" event={"ID":"d5a6afb3-6d78-488e-9f48-9d4b58f998bd","Type":"ContainerStarted","Data":"92ea6be6b71b6e527f90342cc6570dd9768be588d9c5af71b8073998cd053ebd"} Nov 29 06:52:45 crc kubenswrapper[4947]: I1129 06:52:45.696054 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jkq9j" event={"ID":"3c962745-1298-4fbc-a4c7-ae2b75c1ce49","Type":"ContainerStarted","Data":"e7f2370da2e04cf9df5cde4a78536b715b8817e8b1198a36490b1d3cb5622dfe"} Nov 29 06:52:45 crc kubenswrapper[4947]: I1129 06:52:45.698184 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj" event={"ID":"b1873231-ef75-414f-85a0-9536e7e45d24","Type":"ContainerStarted","Data":"a5a9c13cb0eacf17b4f0abced33b3ffd7717f91a1fe47d6b95890d9d08cd9b43"} Nov 29 06:52:45 crc kubenswrapper[4947]: E1129 06:52:45.699398 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7\\\"\"" pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj" podUID="b1873231-ef75-414f-85a0-9536e7e45d24" Nov 29 06:52:45 crc kubenswrapper[4947]: I1129 06:52:45.701359 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-256pd" event={"ID":"3b8c773c-790f-4897-bf54-8ea2a8780a9a","Type":"ContainerStarted","Data":"0ae104e7f0c5ead57364c66bf799e9d8479b2179b3436f7b677554330a65d261"} Nov 29 06:52:45 crc kubenswrapper[4947]: I1129 06:52:45.704176 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-wwvll" event={"ID":"0502c0dc-a197-41c4-a69a-ee8b633f4cb6","Type":"ContainerStarted","Data":"63a0d6a965ec7e8315166fb76ea7db0518ba1c8628f9b62db5008941b3f11693"} Nov 29 06:52:45 crc kubenswrapper[4947]: I1129 06:52:45.704288 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-wwvll" Nov 29 06:52:45 crc kubenswrapper[4947]: I1129 06:52:45.705908 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9bjj9" event={"ID":"7834eae4-c153-4d24-be4b-cfeb03744cff","Type":"ContainerStarted","Data":"653716e14f406e207252bb07c2a49f889c37ab641ec780027c0b583e569dd998"} Nov 29 06:52:45 crc kubenswrapper[4947]: I1129 06:52:45.706163 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9bjj9" Nov 29 06:52:45 crc kubenswrapper[4947]: I1129 06:52:45.708622 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cpl5k" 
event={"ID":"e9138b46-80df-4e49-a519-807c3037d727","Type":"ContainerStarted","Data":"2cb132ba3270f24c0142acfe50f921be10dd5c01aa18e36a2e6efa40f62b4186"} Nov 29 06:52:45 crc kubenswrapper[4947]: I1129 06:52:45.708902 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cpl5k" Nov 29 06:52:45 crc kubenswrapper[4947]: I1129 06:52:45.710737 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-r7gd9" event={"ID":"edb90e2e-84a1-4544-87b4-7e26c9dfd9bc","Type":"ContainerStarted","Data":"c00be02c482e6065ffbc8041c370af0c94ab614b67f73ec0bb3a1b67a713fd99"} Nov 29 06:52:45 crc kubenswrapper[4947]: I1129 06:52:45.713041 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cpl5k" Nov 29 06:52:45 crc kubenswrapper[4947]: I1129 06:52:45.715866 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2" event={"ID":"4af0248a-8357-4dce-95fc-1ed6384dc3f2","Type":"ContainerStarted","Data":"c5215b459df07134f565f41987f35a281687a79ed057ddf92249c761cdc76a0e"} Nov 29 06:52:45 crc kubenswrapper[4947]: I1129 06:52:45.717045 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-27k89" event={"ID":"fcd59aa5-5edf-4200-aa2d-298f8b452fff","Type":"ContainerStarted","Data":"eb62eea0c7a2ec6231028b6f88e0f05f1a802ed85c3de4b9fd5e71a14fbeb07b"} Nov 29 06:52:45 crc kubenswrapper[4947]: E1129 06:52:45.717778 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81\\\"\"" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2" podUID="4af0248a-8357-4dce-95fc-1ed6384dc3f2" Nov 29 06:52:45 crc kubenswrapper[4947]: I1129 06:52:45.726698 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jkq9j" podStartSLOduration=4.155211253 podStartE2EDuration="1m8.726670982s" podCreationTimestamp="2025-11-29 06:51:37 +0000 UTC" firstStartedPulling="2025-11-29 06:51:39.689673908 +0000 UTC m=+1050.734055989" lastFinishedPulling="2025-11-29 06:52:44.261133617 +0000 UTC m=+1115.305515718" observedRunningTime="2025-11-29 06:52:45.72059372 +0000 UTC m=+1116.764975801" watchObservedRunningTime="2025-11-29 06:52:45.726670982 +0000 UTC m=+1116.771053063" Nov 29 06:52:45 crc kubenswrapper[4947]: I1129 06:52:45.861735 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cpl5k" podStartSLOduration=24.371216273 podStartE2EDuration="1m8.861718101s" podCreationTimestamp="2025-11-29 06:51:37 +0000 UTC" firstStartedPulling="2025-11-29 06:51:39.464973719 +0000 UTC m=+1050.509355800" lastFinishedPulling="2025-11-29 06:52:23.955475547 +0000 UTC m=+1094.999857628" observedRunningTime="2025-11-29 06:52:45.857732141 +0000 UTC m=+1116.902114222" watchObservedRunningTime="2025-11-29 06:52:45.861718101 +0000 UTC m=+1116.906100182" Nov 29 06:52:45 crc kubenswrapper[4947]: I1129 06:52:45.905274 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9bjj9" podStartSLOduration=4.754146072 podStartE2EDuration="1m8.90523633s" podCreationTimestamp="2025-11-29 06:51:37 +0000 UTC" firstStartedPulling="2025-11-29 06:51:40.109816083 +0000 UTC m=+1051.154198174" lastFinishedPulling="2025-11-29 06:52:44.260906331 +0000 UTC m=+1115.305288432" observedRunningTime="2025-11-29 
06:52:45.899144747 +0000 UTC m=+1116.943526838" watchObservedRunningTime="2025-11-29 06:52:45.90523633 +0000 UTC m=+1116.949618411" Nov 29 06:52:45 crc kubenswrapper[4947]: I1129 06:52:45.941801 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-wwvll" podStartSLOduration=4.121246628 podStartE2EDuration="1m8.941759044s" podCreationTimestamp="2025-11-29 06:51:37 +0000 UTC" firstStartedPulling="2025-11-29 06:51:39.440484978 +0000 UTC m=+1050.484867059" lastFinishedPulling="2025-11-29 06:52:44.260997384 +0000 UTC m=+1115.305379475" observedRunningTime="2025-11-29 06:52:45.937549008 +0000 UTC m=+1116.981931089" watchObservedRunningTime="2025-11-29 06:52:45.941759044 +0000 UTC m=+1116.986141125" Nov 29 06:52:46 crc kubenswrapper[4947]: I1129 06:52:46.725268 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jkq9j" Nov 29 06:52:46 crc kubenswrapper[4947]: I1129 06:52:46.751894 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lhstd" podStartSLOduration=3.79797208 podStartE2EDuration="1m8.751873862s" podCreationTimestamp="2025-11-29 06:51:38 +0000 UTC" firstStartedPulling="2025-11-29 06:51:40.08408051 +0000 UTC m=+1051.128462581" lastFinishedPulling="2025-11-29 06:52:45.037982282 +0000 UTC m=+1116.082364363" observedRunningTime="2025-11-29 06:52:46.751107743 +0000 UTC m=+1117.795489824" watchObservedRunningTime="2025-11-29 06:52:46.751873862 +0000 UTC m=+1117.796255943" Nov 29 06:52:46 crc kubenswrapper[4947]: I1129 06:52:46.774589 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-r7gd9" podStartSLOduration=4.802052285 podStartE2EDuration="1m9.774566s" podCreationTimestamp="2025-11-29 06:51:37 +0000 UTC" 
firstStartedPulling="2025-11-29 06:51:40.062173645 +0000 UTC m=+1051.106555726" lastFinishedPulling="2025-11-29 06:52:45.03468736 +0000 UTC m=+1116.079069441" observedRunningTime="2025-11-29 06:52:46.77217444 +0000 UTC m=+1117.816556521" watchObservedRunningTime="2025-11-29 06:52:46.774566 +0000 UTC m=+1117.818948081" Nov 29 06:52:46 crc kubenswrapper[4947]: I1129 06:52:46.847921 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-256pd" podStartSLOduration=4.010497557 podStartE2EDuration="1m8.847896674s" podCreationTimestamp="2025-11-29 06:51:38 +0000 UTC" firstStartedPulling="2025-11-29 06:51:40.203319434 +0000 UTC m=+1051.247701515" lastFinishedPulling="2025-11-29 06:52:45.040718551 +0000 UTC m=+1116.085100632" observedRunningTime="2025-11-29 06:52:46.839581916 +0000 UTC m=+1117.883963997" watchObservedRunningTime="2025-11-29 06:52:46.847896674 +0000 UTC m=+1117.892278755" Nov 29 06:52:47 crc kubenswrapper[4947]: E1129 06:52:47.114897 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7\\\"\"" pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj" podUID="b1873231-ef75-414f-85a0-9536e7e45d24" Nov 29 06:52:47 crc kubenswrapper[4947]: E1129 06:52:47.115038 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2" podUID="4af0248a-8357-4dce-95fc-1ed6384dc3f2" Nov 29 06:52:48 crc kubenswrapper[4947]: I1129 
06:52:48.788923 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-r7gd9" Nov 29 06:52:49 crc kubenswrapper[4947]: I1129 06:52:49.075900 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lhstd" Nov 29 06:52:53 crc kubenswrapper[4947]: I1129 06:52:53.808721 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-27k89" Nov 29 06:52:53 crc kubenswrapper[4947]: I1129 06:52:53.834816 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-27k89" podStartSLOduration=3.04406568 podStartE2EDuration="1m16.834794868s" podCreationTimestamp="2025-11-29 06:51:37 +0000 UTC" firstStartedPulling="2025-11-29 06:51:39.676875823 +0000 UTC m=+1050.721257904" lastFinishedPulling="2025-11-29 06:52:53.467605011 +0000 UTC m=+1124.511987092" observedRunningTime="2025-11-29 06:52:53.832809019 +0000 UTC m=+1124.877191100" watchObservedRunningTime="2025-11-29 06:52:53.834794868 +0000 UTC m=+1124.879176949" Nov 29 06:52:54 crc kubenswrapper[4947]: I1129 06:52:54.828027 4947 generic.go:334] "Generic (PLEG): container finished" podID="7c85c02b-9761-4495-b695-e251bcf72ed1" containerID="cdcfddbb2fe7cb1842a5c38f3ea2fe1e7be425fd081eb719c0580ca9aaf44515" exitCode=0 Nov 29 06:52:54 crc kubenswrapper[4947]: I1129 06:52:54.828105 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tkhhz" event={"ID":"7c85c02b-9761-4495-b695-e251bcf72ed1","Type":"ContainerDied","Data":"cdcfddbb2fe7cb1842a5c38f3ea2fe1e7be425fd081eb719c0580ca9aaf44515"} Nov 29 06:52:54 crc kubenswrapper[4947]: I1129 06:52:54.832790 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cfpc5" event={"ID":"f5486cef-a49e-43c1-b4b2-798ee149238f","Type":"ContainerStarted","Data":"9aadf9b648fe558dd40afe2d677d64ba233728b69d86ae1405a303b7cc83ef0d"} Nov 29 06:52:54 crc kubenswrapper[4947]: I1129 06:52:54.833513 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cfpc5" Nov 29 06:52:54 crc kubenswrapper[4947]: I1129 06:52:54.833533 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cfpc5" event={"ID":"f5486cef-a49e-43c1-b4b2-798ee149238f","Type":"ContainerStarted","Data":"1c722a988072d7a138006d8ca57e84f827e2542e643427f3842c8c408772b940"} Nov 29 06:52:54 crc kubenswrapper[4947]: I1129 06:52:54.836572 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p6ml6" event={"ID":"8f81b728-2bf1-4638-91f7-717ab75349f3","Type":"ContainerStarted","Data":"2056a126f26ac8d74902e41045e173f2b8a8a978e2e1d117704bb410631471a3"} Nov 29 06:52:54 crc kubenswrapper[4947]: I1129 06:52:54.837698 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p6ml6" Nov 29 06:52:54 crc kubenswrapper[4947]: I1129 06:52:54.837874 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p6ml6" event={"ID":"8f81b728-2bf1-4638-91f7-717ab75349f3","Type":"ContainerStarted","Data":"593c4e5b4af1bffc0ec21234e2a390d83777f8cde644017794c41b0f4a7725a9"} Nov 29 06:52:54 crc kubenswrapper[4947]: I1129 06:52:54.841296 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-mmx2k" 
event={"ID":"401765a6-8ea5-478e-bd29-5c2c717b57d3","Type":"ContainerStarted","Data":"972d44cda1b52384512c86aa1e69ad5401189ed63f78b33e00c32af7096f0728"} Nov 29 06:52:54 crc kubenswrapper[4947]: I1129 06:52:54.841419 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-mmx2k" event={"ID":"401765a6-8ea5-478e-bd29-5c2c717b57d3","Type":"ContainerStarted","Data":"491309323fb48ee3a5d2a3dbc4890baf6f4705bf756c47429b005cb6fa89916b"} Nov 29 06:52:54 crc kubenswrapper[4947]: I1129 06:52:54.841454 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-mmx2k" Nov 29 06:52:54 crc kubenswrapper[4947]: I1129 06:52:54.844054 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-27k89" event={"ID":"fcd59aa5-5edf-4200-aa2d-298f8b452fff","Type":"ContainerStarted","Data":"ca3116a8b0e40ca75d6def377cfb2902acefdff18b469c762b0b50b4f3243844"} Nov 29 06:52:54 crc kubenswrapper[4947]: I1129 06:52:54.890548 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p6ml6" podStartSLOduration=3.859317235 podStartE2EDuration="1m17.890520321s" podCreationTimestamp="2025-11-29 06:51:37 +0000 UTC" firstStartedPulling="2025-11-29 06:51:39.441003161 +0000 UTC m=+1050.485385242" lastFinishedPulling="2025-11-29 06:52:53.472206237 +0000 UTC m=+1124.516588328" observedRunningTime="2025-11-29 06:52:54.889528247 +0000 UTC m=+1125.933910328" watchObservedRunningTime="2025-11-29 06:52:54.890520321 +0000 UTC m=+1125.934902402" Nov 29 06:52:54 crc kubenswrapper[4947]: I1129 06:52:54.910665 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cfpc5" podStartSLOduration=4.080921338 
podStartE2EDuration="1m17.910648685s" podCreationTimestamp="2025-11-29 06:51:37 +0000 UTC" firstStartedPulling="2025-11-29 06:51:39.638693935 +0000 UTC m=+1050.683076006" lastFinishedPulling="2025-11-29 06:52:53.468421272 +0000 UTC m=+1124.512803353" observedRunningTime="2025-11-29 06:52:54.909555058 +0000 UTC m=+1125.953937139" watchObservedRunningTime="2025-11-29 06:52:54.910648685 +0000 UTC m=+1125.955030766" Nov 29 06:52:54 crc kubenswrapper[4947]: I1129 06:52:54.931315 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-mmx2k" podStartSLOduration=4.573202826 podStartE2EDuration="1m17.931289271s" podCreationTimestamp="2025-11-29 06:51:37 +0000 UTC" firstStartedPulling="2025-11-29 06:51:40.110392778 +0000 UTC m=+1051.154774859" lastFinishedPulling="2025-11-29 06:52:53.468479233 +0000 UTC m=+1124.512861304" observedRunningTime="2025-11-29 06:52:54.928464971 +0000 UTC m=+1125.972847052" watchObservedRunningTime="2025-11-29 06:52:54.931289271 +0000 UTC m=+1125.975671352" Nov 29 06:52:57 crc kubenswrapper[4947]: I1129 06:52:57.870110 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tkhhz" event={"ID":"7c85c02b-9761-4495-b695-e251bcf72ed1","Type":"ContainerStarted","Data":"ea7f8fdebb82f5d7f17c07b6a2c772ba58019d056510c1adfafe923c153fa4d7"} Nov 29 06:52:57 crc kubenswrapper[4947]: I1129 06:52:57.896506 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tkhhz" podStartSLOduration=15.536821197 podStartE2EDuration="30.896477107s" podCreationTimestamp="2025-11-29 06:52:27 +0000 UTC" firstStartedPulling="2025-11-29 06:52:41.796350761 +0000 UTC m=+1112.840732842" lastFinishedPulling="2025-11-29 06:52:57.156006671 +0000 UTC m=+1128.200388752" observedRunningTime="2025-11-29 06:52:57.891707267 +0000 UTC m=+1128.936089348" watchObservedRunningTime="2025-11-29 
06:52:57.896477107 +0000 UTC m=+1128.940859188" Nov 29 06:52:58 crc kubenswrapper[4947]: I1129 06:52:58.160060 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-p6ml6" Nov 29 06:52:58 crc kubenswrapper[4947]: I1129 06:52:58.277100 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tkhhz" Nov 29 06:52:58 crc kubenswrapper[4947]: I1129 06:52:58.277163 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tkhhz" Nov 29 06:52:58 crc kubenswrapper[4947]: I1129 06:52:58.325769 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-wwvll" Nov 29 06:52:58 crc kubenswrapper[4947]: I1129 06:52:58.393757 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-cfpc5" Nov 29 06:52:58 crc kubenswrapper[4947]: I1129 06:52:58.540869 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-27k89" Nov 29 06:52:58 crc kubenswrapper[4947]: I1129 06:52:58.570754 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jkq9j" Nov 29 06:52:58 crc kubenswrapper[4947]: I1129 06:52:58.763246 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-mmx2k" Nov 29 06:52:58 crc kubenswrapper[4947]: I1129 06:52:58.791873 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-r7gd9" Nov 29 06:52:58 crc kubenswrapper[4947]: I1129 06:52:58.855029 
4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9bjj9" Nov 29 06:52:59 crc kubenswrapper[4947]: I1129 06:52:59.079368 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lhstd" Nov 29 06:52:59 crc kubenswrapper[4947]: I1129 06:52:59.318612 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-tkhhz" podUID="7c85c02b-9761-4495-b695-e251bcf72ed1" containerName="registry-server" probeResult="failure" output=< Nov 29 06:52:59 crc kubenswrapper[4947]: timeout: failed to connect service ":50051" within 1s Nov 29 06:52:59 crc kubenswrapper[4947]: > Nov 29 06:53:00 crc kubenswrapper[4947]: I1129 06:53:00.897823 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj" event={"ID":"b1873231-ef75-414f-85a0-9536e7e45d24","Type":"ContainerStarted","Data":"0e03497d2203a15bdb583ef9de1027023c0bc626db2fe0f68a75dcc3e7061750"} Nov 29 06:53:00 crc kubenswrapper[4947]: I1129 06:53:00.899446 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj" Nov 29 06:53:00 crc kubenswrapper[4947]: I1129 06:53:00.924793 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj" podStartSLOduration=41.924645224 podStartE2EDuration="1m23.92477102s" podCreationTimestamp="2025-11-29 06:51:37 +0000 UTC" firstStartedPulling="2025-11-29 06:52:18.28495742 +0000 UTC m=+1089.329339501" lastFinishedPulling="2025-11-29 06:53:00.285083206 +0000 UTC m=+1131.329465297" observedRunningTime="2025-11-29 06:53:00.920564135 +0000 UTC m=+1131.964946216" watchObservedRunningTime="2025-11-29 06:53:00.92477102 +0000 UTC m=+1131.969153101" Nov 
29 06:53:02 crc kubenswrapper[4947]: I1129 06:53:02.916533 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2" event={"ID":"4af0248a-8357-4dce-95fc-1ed6384dc3f2","Type":"ContainerStarted","Data":"b3026caaa861dd3f4ca83b0a9f7109f3aff1345b10080cecee4189ddd8daebfc"} Nov 29 06:53:02 crc kubenswrapper[4947]: I1129 06:53:02.917087 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2" Nov 29 06:53:02 crc kubenswrapper[4947]: I1129 06:53:02.948307 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2" podStartSLOduration=39.153698197 podStartE2EDuration="1m24.948285646s" podCreationTimestamp="2025-11-29 06:51:38 +0000 UTC" firstStartedPulling="2025-11-29 06:52:16.062555398 +0000 UTC m=+1087.106937519" lastFinishedPulling="2025-11-29 06:53:01.857142877 +0000 UTC m=+1132.901524968" observedRunningTime="2025-11-29 06:53:02.944203653 +0000 UTC m=+1133.988585744" watchObservedRunningTime="2025-11-29 06:53:02.948285646 +0000 UTC m=+1133.992667737" Nov 29 06:53:08 crc kubenswrapper[4947]: I1129 06:53:08.332313 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tkhhz" Nov 29 06:53:08 crc kubenswrapper[4947]: I1129 06:53:08.385179 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tkhhz" Nov 29 06:53:08 crc kubenswrapper[4947]: I1129 06:53:08.575243 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tkhhz"] Nov 29 06:53:09 crc kubenswrapper[4947]: I1129 06:53:09.967010 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tkhhz" 
podUID="7c85c02b-9761-4495-b695-e251bcf72ed1" containerName="registry-server" containerID="cri-o://ea7f8fdebb82f5d7f17c07b6a2c772ba58019d056510c1adfafe923c153fa4d7" gracePeriod=2 Nov 29 06:53:10 crc kubenswrapper[4947]: I1129 06:53:10.398074 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-4ffsj" Nov 29 06:53:10 crc kubenswrapper[4947]: I1129 06:53:10.457129 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tkhhz" Nov 29 06:53:10 crc kubenswrapper[4947]: I1129 06:53:10.594439 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c85c02b-9761-4495-b695-e251bcf72ed1-utilities\") pod \"7c85c02b-9761-4495-b695-e251bcf72ed1\" (UID: \"7c85c02b-9761-4495-b695-e251bcf72ed1\") " Nov 29 06:53:10 crc kubenswrapper[4947]: I1129 06:53:10.594654 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99rcf\" (UniqueName: \"kubernetes.io/projected/7c85c02b-9761-4495-b695-e251bcf72ed1-kube-api-access-99rcf\") pod \"7c85c02b-9761-4495-b695-e251bcf72ed1\" (UID: \"7c85c02b-9761-4495-b695-e251bcf72ed1\") " Nov 29 06:53:10 crc kubenswrapper[4947]: I1129 06:53:10.594764 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c85c02b-9761-4495-b695-e251bcf72ed1-catalog-content\") pod \"7c85c02b-9761-4495-b695-e251bcf72ed1\" (UID: \"7c85c02b-9761-4495-b695-e251bcf72ed1\") " Nov 29 06:53:10 crc kubenswrapper[4947]: I1129 06:53:10.595489 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c85c02b-9761-4495-b695-e251bcf72ed1-utilities" (OuterVolumeSpecName: "utilities") pod "7c85c02b-9761-4495-b695-e251bcf72ed1" (UID: "7c85c02b-9761-4495-b695-e251bcf72ed1"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:53:10 crc kubenswrapper[4947]: I1129 06:53:10.602629 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c85c02b-9761-4495-b695-e251bcf72ed1-kube-api-access-99rcf" (OuterVolumeSpecName: "kube-api-access-99rcf") pod "7c85c02b-9761-4495-b695-e251bcf72ed1" (UID: "7c85c02b-9761-4495-b695-e251bcf72ed1"). InnerVolumeSpecName "kube-api-access-99rcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:53:10 crc kubenswrapper[4947]: I1129 06:53:10.619729 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c85c02b-9761-4495-b695-e251bcf72ed1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c85c02b-9761-4495-b695-e251bcf72ed1" (UID: "7c85c02b-9761-4495-b695-e251bcf72ed1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:53:10 crc kubenswrapper[4947]: I1129 06:53:10.696354 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c85c02b-9761-4495-b695-e251bcf72ed1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 06:53:10 crc kubenswrapper[4947]: I1129 06:53:10.696397 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c85c02b-9761-4495-b695-e251bcf72ed1-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 06:53:10 crc kubenswrapper[4947]: I1129 06:53:10.696413 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99rcf\" (UniqueName: \"kubernetes.io/projected/7c85c02b-9761-4495-b695-e251bcf72ed1-kube-api-access-99rcf\") on node \"crc\" DevicePath \"\"" Nov 29 06:53:10 crc kubenswrapper[4947]: I1129 06:53:10.977986 4947 generic.go:334] "Generic (PLEG): container finished" podID="7c85c02b-9761-4495-b695-e251bcf72ed1" 
containerID="ea7f8fdebb82f5d7f17c07b6a2c772ba58019d056510c1adfafe923c153fa4d7" exitCode=0 Nov 29 06:53:10 crc kubenswrapper[4947]: I1129 06:53:10.978065 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tkhhz" event={"ID":"7c85c02b-9761-4495-b695-e251bcf72ed1","Type":"ContainerDied","Data":"ea7f8fdebb82f5d7f17c07b6a2c772ba58019d056510c1adfafe923c153fa4d7"} Nov 29 06:53:10 crc kubenswrapper[4947]: I1129 06:53:10.978088 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tkhhz" Nov 29 06:53:10 crc kubenswrapper[4947]: I1129 06:53:10.978112 4947 scope.go:117] "RemoveContainer" containerID="ea7f8fdebb82f5d7f17c07b6a2c772ba58019d056510c1adfafe923c153fa4d7" Nov 29 06:53:10 crc kubenswrapper[4947]: I1129 06:53:10.978101 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tkhhz" event={"ID":"7c85c02b-9761-4495-b695-e251bcf72ed1","Type":"ContainerDied","Data":"d9c2836731d451c9add8b965a151765dd84932987b48b7c46905e0f25e3fca8b"} Nov 29 06:53:11 crc kubenswrapper[4947]: I1129 06:53:11.000208 4947 scope.go:117] "RemoveContainer" containerID="cdcfddbb2fe7cb1842a5c38f3ea2fe1e7be425fd081eb719c0580ca9aaf44515" Nov 29 06:53:11 crc kubenswrapper[4947]: I1129 06:53:11.024468 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tkhhz"] Nov 29 06:53:11 crc kubenswrapper[4947]: I1129 06:53:11.033562 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tkhhz"] Nov 29 06:53:11 crc kubenswrapper[4947]: I1129 06:53:11.041235 4947 scope.go:117] "RemoveContainer" containerID="b62348e8b01b8e08a4a6ca4ba79d2b0f048ea97b902de91fe273cb86e00594ba" Nov 29 06:53:11 crc kubenswrapper[4947]: I1129 06:53:11.064744 4947 scope.go:117] "RemoveContainer" containerID="ea7f8fdebb82f5d7f17c07b6a2c772ba58019d056510c1adfafe923c153fa4d7" Nov 29 
06:53:11 crc kubenswrapper[4947]: E1129 06:53:11.065284 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea7f8fdebb82f5d7f17c07b6a2c772ba58019d056510c1adfafe923c153fa4d7\": container with ID starting with ea7f8fdebb82f5d7f17c07b6a2c772ba58019d056510c1adfafe923c153fa4d7 not found: ID does not exist" containerID="ea7f8fdebb82f5d7f17c07b6a2c772ba58019d056510c1adfafe923c153fa4d7" Nov 29 06:53:11 crc kubenswrapper[4947]: I1129 06:53:11.065327 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea7f8fdebb82f5d7f17c07b6a2c772ba58019d056510c1adfafe923c153fa4d7"} err="failed to get container status \"ea7f8fdebb82f5d7f17c07b6a2c772ba58019d056510c1adfafe923c153fa4d7\": rpc error: code = NotFound desc = could not find container \"ea7f8fdebb82f5d7f17c07b6a2c772ba58019d056510c1adfafe923c153fa4d7\": container with ID starting with ea7f8fdebb82f5d7f17c07b6a2c772ba58019d056510c1adfafe923c153fa4d7 not found: ID does not exist" Nov 29 06:53:11 crc kubenswrapper[4947]: I1129 06:53:11.065395 4947 scope.go:117] "RemoveContainer" containerID="cdcfddbb2fe7cb1842a5c38f3ea2fe1e7be425fd081eb719c0580ca9aaf44515" Nov 29 06:53:11 crc kubenswrapper[4947]: E1129 06:53:11.065757 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdcfddbb2fe7cb1842a5c38f3ea2fe1e7be425fd081eb719c0580ca9aaf44515\": container with ID starting with cdcfddbb2fe7cb1842a5c38f3ea2fe1e7be425fd081eb719c0580ca9aaf44515 not found: ID does not exist" containerID="cdcfddbb2fe7cb1842a5c38f3ea2fe1e7be425fd081eb719c0580ca9aaf44515" Nov 29 06:53:11 crc kubenswrapper[4947]: I1129 06:53:11.065788 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdcfddbb2fe7cb1842a5c38f3ea2fe1e7be425fd081eb719c0580ca9aaf44515"} err="failed to get container status 
\"cdcfddbb2fe7cb1842a5c38f3ea2fe1e7be425fd081eb719c0580ca9aaf44515\": rpc error: code = NotFound desc = could not find container \"cdcfddbb2fe7cb1842a5c38f3ea2fe1e7be425fd081eb719c0580ca9aaf44515\": container with ID starting with cdcfddbb2fe7cb1842a5c38f3ea2fe1e7be425fd081eb719c0580ca9aaf44515 not found: ID does not exist" Nov 29 06:53:11 crc kubenswrapper[4947]: I1129 06:53:11.065806 4947 scope.go:117] "RemoveContainer" containerID="b62348e8b01b8e08a4a6ca4ba79d2b0f048ea97b902de91fe273cb86e00594ba" Nov 29 06:53:11 crc kubenswrapper[4947]: E1129 06:53:11.066324 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b62348e8b01b8e08a4a6ca4ba79d2b0f048ea97b902de91fe273cb86e00594ba\": container with ID starting with b62348e8b01b8e08a4a6ca4ba79d2b0f048ea97b902de91fe273cb86e00594ba not found: ID does not exist" containerID="b62348e8b01b8e08a4a6ca4ba79d2b0f048ea97b902de91fe273cb86e00594ba" Nov 29 06:53:11 crc kubenswrapper[4947]: I1129 06:53:11.066352 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b62348e8b01b8e08a4a6ca4ba79d2b0f048ea97b902de91fe273cb86e00594ba"} err="failed to get container status \"b62348e8b01b8e08a4a6ca4ba79d2b0f048ea97b902de91fe273cb86e00594ba\": rpc error: code = NotFound desc = could not find container \"b62348e8b01b8e08a4a6ca4ba79d2b0f048ea97b902de91fe273cb86e00594ba\": container with ID starting with b62348e8b01b8e08a4a6ca4ba79d2b0f048ea97b902de91fe273cb86e00594ba not found: ID does not exist" Nov 29 06:53:11 crc kubenswrapper[4947]: I1129 06:53:11.191291 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c85c02b-9761-4495-b695-e251bcf72ed1" path="/var/lib/kubelet/pods/7c85c02b-9761-4495-b695-e251bcf72ed1/volumes" Nov 29 06:53:14 crc kubenswrapper[4947]: I1129 06:53:14.846441 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2" Nov 29 06:53:31 crc kubenswrapper[4947]: I1129 06:53:31.989077 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cz49j"] Nov 29 06:53:31 crc kubenswrapper[4947]: E1129 06:53:31.990092 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c85c02b-9761-4495-b695-e251bcf72ed1" containerName="registry-server" Nov 29 06:53:31 crc kubenswrapper[4947]: I1129 06:53:31.990108 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c85c02b-9761-4495-b695-e251bcf72ed1" containerName="registry-server" Nov 29 06:53:31 crc kubenswrapper[4947]: E1129 06:53:31.990138 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c85c02b-9761-4495-b695-e251bcf72ed1" containerName="extract-utilities" Nov 29 06:53:31 crc kubenswrapper[4947]: I1129 06:53:31.990145 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c85c02b-9761-4495-b695-e251bcf72ed1" containerName="extract-utilities" Nov 29 06:53:31 crc kubenswrapper[4947]: E1129 06:53:31.990178 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c85c02b-9761-4495-b695-e251bcf72ed1" containerName="extract-content" Nov 29 06:53:31 crc kubenswrapper[4947]: I1129 06:53:31.990186 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c85c02b-9761-4495-b695-e251bcf72ed1" containerName="extract-content" Nov 29 06:53:31 crc kubenswrapper[4947]: I1129 06:53:31.990514 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c85c02b-9761-4495-b695-e251bcf72ed1" containerName="registry-server" Nov 29 06:53:31 crc kubenswrapper[4947]: I1129 06:53:31.991529 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-cz49j" Nov 29 06:53:31 crc kubenswrapper[4947]: I1129 06:53:31.998098 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 29 06:53:31 crc kubenswrapper[4947]: I1129 06:53:31.998167 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 29 06:53:31 crc kubenswrapper[4947]: I1129 06:53:31.998114 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 29 06:53:31 crc kubenswrapper[4947]: I1129 06:53:31.998463 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-b9f5q" Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.011832 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cz49j"] Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.084842 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-22brq"] Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.086423 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-22brq" Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.089560 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.112653 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-22brq"] Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.120659 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a24c29c-f137-4066-8f99-a4714b59e044-config\") pod \"dnsmasq-dns-675f4bcbfc-cz49j\" (UID: \"5a24c29c-f137-4066-8f99-a4714b59e044\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cz49j" Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.121426 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsr5w\" (UniqueName: \"kubernetes.io/projected/5a24c29c-f137-4066-8f99-a4714b59e044-kube-api-access-rsr5w\") pod \"dnsmasq-dns-675f4bcbfc-cz49j\" (UID: \"5a24c29c-f137-4066-8f99-a4714b59e044\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cz49j" Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.222629 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxn59\" (UniqueName: \"kubernetes.io/projected/2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0-kube-api-access-vxn59\") pod \"dnsmasq-dns-78dd6ddcc-22brq\" (UID: \"2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22brq" Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.222727 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a24c29c-f137-4066-8f99-a4714b59e044-config\") pod \"dnsmasq-dns-675f4bcbfc-cz49j\" (UID: \"5a24c29c-f137-4066-8f99-a4714b59e044\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cz49j" Nov 
29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.222755 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-22brq\" (UID: \"2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22brq" Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.222776 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0-config\") pod \"dnsmasq-dns-78dd6ddcc-22brq\" (UID: \"2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22brq" Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.222926 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsr5w\" (UniqueName: \"kubernetes.io/projected/5a24c29c-f137-4066-8f99-a4714b59e044-kube-api-access-rsr5w\") pod \"dnsmasq-dns-675f4bcbfc-cz49j\" (UID: \"5a24c29c-f137-4066-8f99-a4714b59e044\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cz49j" Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.223674 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a24c29c-f137-4066-8f99-a4714b59e044-config\") pod \"dnsmasq-dns-675f4bcbfc-cz49j\" (UID: \"5a24c29c-f137-4066-8f99-a4714b59e044\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cz49j" Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.248099 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsr5w\" (UniqueName: \"kubernetes.io/projected/5a24c29c-f137-4066-8f99-a4714b59e044-kube-api-access-rsr5w\") pod \"dnsmasq-dns-675f4bcbfc-cz49j\" (UID: \"5a24c29c-f137-4066-8f99-a4714b59e044\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cz49j" Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 
06:53:32.324434 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxn59\" (UniqueName: \"kubernetes.io/projected/2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0-kube-api-access-vxn59\") pod \"dnsmasq-dns-78dd6ddcc-22brq\" (UID: \"2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22brq" Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.324545 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-22brq\" (UID: \"2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22brq" Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.324576 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0-config\") pod \"dnsmasq-dns-78dd6ddcc-22brq\" (UID: \"2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22brq" Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.326003 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0-config\") pod \"dnsmasq-dns-78dd6ddcc-22brq\" (UID: \"2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22brq" Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.327152 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-22brq\" (UID: \"2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22brq" Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.332424 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-cz49j" Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.347843 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxn59\" (UniqueName: \"kubernetes.io/projected/2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0-kube-api-access-vxn59\") pod \"dnsmasq-dns-78dd6ddcc-22brq\" (UID: \"2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22brq" Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.405069 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-22brq" Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.838799 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cz49j"] Nov 29 06:53:32 crc kubenswrapper[4947]: I1129 06:53:32.934900 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-22brq"] Nov 29 06:53:33 crc kubenswrapper[4947]: I1129 06:53:33.188611 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-cz49j" event={"ID":"5a24c29c-f137-4066-8f99-a4714b59e044","Type":"ContainerStarted","Data":"549543a89348b89a4fb0ce1135fb4ab4f892f9f16fca8c08ab8201a7bd4aaa38"} Nov 29 06:53:33 crc kubenswrapper[4947]: I1129 06:53:33.188949 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-22brq" event={"ID":"2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0","Type":"ContainerStarted","Data":"025062be4034d66b7d42828a8ab211dddbd999e997a34d1ff4afbd76f8993e14"} Nov 29 06:53:34 crc kubenswrapper[4947]: I1129 06:53:34.869537 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cz49j"] Nov 29 06:53:34 crc kubenswrapper[4947]: I1129 06:53:34.934245 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lfrj7"] Nov 29 06:53:34 crc kubenswrapper[4947]: I1129 06:53:34.942605 4947 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" Nov 29 06:53:34 crc kubenswrapper[4947]: I1129 06:53:34.949629 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lfrj7"] Nov 29 06:53:34 crc kubenswrapper[4947]: I1129 06:53:34.973967 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/565b2507-ddbb-4657-b879-082fb62e1284-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lfrj7\" (UID: \"565b2507-ddbb-4657-b879-082fb62e1284\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" Nov 29 06:53:34 crc kubenswrapper[4947]: I1129 06:53:34.974049 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/565b2507-ddbb-4657-b879-082fb62e1284-config\") pod \"dnsmasq-dns-5ccc8479f9-lfrj7\" (UID: \"565b2507-ddbb-4657-b879-082fb62e1284\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" Nov 29 06:53:34 crc kubenswrapper[4947]: I1129 06:53:34.974098 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d64vv\" (UniqueName: \"kubernetes.io/projected/565b2507-ddbb-4657-b879-082fb62e1284-kube-api-access-d64vv\") pod \"dnsmasq-dns-5ccc8479f9-lfrj7\" (UID: \"565b2507-ddbb-4657-b879-082fb62e1284\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" Nov 29 06:53:35 crc kubenswrapper[4947]: I1129 06:53:35.085688 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/565b2507-ddbb-4657-b879-082fb62e1284-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lfrj7\" (UID: \"565b2507-ddbb-4657-b879-082fb62e1284\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" Nov 29 06:53:35 crc kubenswrapper[4947]: I1129 06:53:35.085754 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/565b2507-ddbb-4657-b879-082fb62e1284-config\") pod \"dnsmasq-dns-5ccc8479f9-lfrj7\" (UID: \"565b2507-ddbb-4657-b879-082fb62e1284\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" Nov 29 06:53:35 crc kubenswrapper[4947]: I1129 06:53:35.085796 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d64vv\" (UniqueName: \"kubernetes.io/projected/565b2507-ddbb-4657-b879-082fb62e1284-kube-api-access-d64vv\") pod \"dnsmasq-dns-5ccc8479f9-lfrj7\" (UID: \"565b2507-ddbb-4657-b879-082fb62e1284\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" Nov 29 06:53:35 crc kubenswrapper[4947]: I1129 06:53:35.087428 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/565b2507-ddbb-4657-b879-082fb62e1284-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lfrj7\" (UID: \"565b2507-ddbb-4657-b879-082fb62e1284\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" Nov 29 06:53:35 crc kubenswrapper[4947]: I1129 06:53:35.088118 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/565b2507-ddbb-4657-b879-082fb62e1284-config\") pod \"dnsmasq-dns-5ccc8479f9-lfrj7\" (UID: \"565b2507-ddbb-4657-b879-082fb62e1284\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" Nov 29 06:53:35 crc kubenswrapper[4947]: I1129 06:53:35.110963 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d64vv\" (UniqueName: \"kubernetes.io/projected/565b2507-ddbb-4657-b879-082fb62e1284-kube-api-access-d64vv\") pod \"dnsmasq-dns-5ccc8479f9-lfrj7\" (UID: \"565b2507-ddbb-4657-b879-082fb62e1284\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" Nov 29 06:53:35 crc kubenswrapper[4947]: I1129 06:53:35.277901 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" Nov 29 06:53:35 crc kubenswrapper[4947]: I1129 06:53:35.401828 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-22brq"] Nov 29 06:53:35 crc kubenswrapper[4947]: I1129 06:53:35.443340 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jrnft"] Nov 29 06:53:35 crc kubenswrapper[4947]: I1129 06:53:35.445396 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" Nov 29 06:53:35 crc kubenswrapper[4947]: I1129 06:53:35.451620 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jrnft"] Nov 29 06:53:35 crc kubenswrapper[4947]: I1129 06:53:35.598318 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0de3454e-aa59-4cbf-855f-3a161faa93eb-config\") pod \"dnsmasq-dns-57d769cc4f-jrnft\" (UID: \"0de3454e-aa59-4cbf-855f-3a161faa93eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" Nov 29 06:53:35 crc kubenswrapper[4947]: I1129 06:53:35.598384 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6t4k\" (UniqueName: \"kubernetes.io/projected/0de3454e-aa59-4cbf-855f-3a161faa93eb-kube-api-access-d6t4k\") pod \"dnsmasq-dns-57d769cc4f-jrnft\" (UID: \"0de3454e-aa59-4cbf-855f-3a161faa93eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" Nov 29 06:53:35 crc kubenswrapper[4947]: I1129 06:53:35.598550 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0de3454e-aa59-4cbf-855f-3a161faa93eb-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jrnft\" (UID: \"0de3454e-aa59-4cbf-855f-3a161faa93eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" Nov 29 06:53:35 crc kubenswrapper[4947]: I1129 
06:53:35.699646 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0de3454e-aa59-4cbf-855f-3a161faa93eb-config\") pod \"dnsmasq-dns-57d769cc4f-jrnft\" (UID: \"0de3454e-aa59-4cbf-855f-3a161faa93eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" Nov 29 06:53:35 crc kubenswrapper[4947]: I1129 06:53:35.699709 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6t4k\" (UniqueName: \"kubernetes.io/projected/0de3454e-aa59-4cbf-855f-3a161faa93eb-kube-api-access-d6t4k\") pod \"dnsmasq-dns-57d769cc4f-jrnft\" (UID: \"0de3454e-aa59-4cbf-855f-3a161faa93eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" Nov 29 06:53:35 crc kubenswrapper[4947]: I1129 06:53:35.699768 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0de3454e-aa59-4cbf-855f-3a161faa93eb-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jrnft\" (UID: \"0de3454e-aa59-4cbf-855f-3a161faa93eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" Nov 29 06:53:35 crc kubenswrapper[4947]: I1129 06:53:35.700720 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0de3454e-aa59-4cbf-855f-3a161faa93eb-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jrnft\" (UID: \"0de3454e-aa59-4cbf-855f-3a161faa93eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" Nov 29 06:53:35 crc kubenswrapper[4947]: I1129 06:53:35.700994 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0de3454e-aa59-4cbf-855f-3a161faa93eb-config\") pod \"dnsmasq-dns-57d769cc4f-jrnft\" (UID: \"0de3454e-aa59-4cbf-855f-3a161faa93eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" Nov 29 06:53:35 crc kubenswrapper[4947]: I1129 06:53:35.738083 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6t4k\" 
(UniqueName: \"kubernetes.io/projected/0de3454e-aa59-4cbf-855f-3a161faa93eb-kube-api-access-d6t4k\") pod \"dnsmasq-dns-57d769cc4f-jrnft\" (UID: \"0de3454e-aa59-4cbf-855f-3a161faa93eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" Nov 29 06:53:35 crc kubenswrapper[4947]: I1129 06:53:35.770040 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" Nov 29 06:53:35 crc kubenswrapper[4947]: I1129 06:53:35.928575 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lfrj7"] Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.195375 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.197038 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.204025 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.204155 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.207516 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.218628 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.219075 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.219306 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 
06:53:36.219307 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xvs4b" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.224015 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" event={"ID":"565b2507-ddbb-4657-b879-082fb62e1284","Type":"ContainerStarted","Data":"1a252ac3550346ec54d4e105b1025e327e0b56a112aff008fb30f415ea8f879c"} Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.228869 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.247967 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jrnft"] Nov 29 06:53:36 crc kubenswrapper[4947]: W1129 06:53:36.262840 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0de3454e_aa59_4cbf_855f_3a161faa93eb.slice/crio-17f65377fb75177c08ac60b7c62a9c1b9487eb46944d20a54cfc460e33f0ab6d WatchSource:0}: Error finding container 17f65377fb75177c08ac60b7c62a9c1b9487eb46944d20a54cfc460e33f0ab6d: Status 404 returned error can't find the container with id 17f65377fb75177c08ac60b7c62a9c1b9487eb46944d20a54cfc460e33f0ab6d Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.319537 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.319598 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.319629 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.319654 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.319689 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.319710 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.319735 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.319797 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.319813 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.319830 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.319854 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87qj4\" (UniqueName: \"kubernetes.io/projected/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-kube-api-access-87qj4\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.421140 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.421190 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.421208 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.421241 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87qj4\" (UniqueName: \"kubernetes.io/projected/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-kube-api-access-87qj4\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.421282 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.421302 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 
06:53:36.421322 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.421339 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.421365 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.421382 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.421417 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.422245 4947 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.422510 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.422554 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.423628 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.425049 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.425600 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.428022 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.428646 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.430692 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.430916 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.444385 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87qj4\" (UniqueName: \"kubernetes.io/projected/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-kube-api-access-87qj4\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.447557 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.547032 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.571563 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.574133 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.578696 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.578883 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.579008 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-j9h7d"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.579126 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.579333 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.579465 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.579593 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.584043 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.726676 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1df9108b-7e5b-4dd6-bd7e-787381428bce-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.726729 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1df9108b-7e5b-4dd6-bd7e-787381428bce-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.726761 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.726785 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1df9108b-7e5b-4dd6-bd7e-787381428bce-config-data\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.726818 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hflkc\" (UniqueName: \"kubernetes.io/projected/1df9108b-7e5b-4dd6-bd7e-787381428bce-kube-api-access-hflkc\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.726853 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.726874 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1df9108b-7e5b-4dd6-bd7e-787381428bce-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.726896 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.726916 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1df9108b-7e5b-4dd6-bd7e-787381428bce-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.726942 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.726963 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.832384 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1df9108b-7e5b-4dd6-bd7e-787381428bce-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.832473 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.832496 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.832581 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1df9108b-7e5b-4dd6-bd7e-787381428bce-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.832611 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1df9108b-7e5b-4dd6-bd7e-787381428bce-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.832650 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.832960 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1df9108b-7e5b-4dd6-bd7e-787381428bce-config-data\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.833034 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hflkc\" (UniqueName: \"kubernetes.io/projected/1df9108b-7e5b-4dd6-bd7e-787381428bce-kube-api-access-hflkc\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.833098 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.833125 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1df9108b-7e5b-4dd6-bd7e-787381428bce-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.833162 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.833399 4947 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.837860 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1df9108b-7e5b-4dd6-bd7e-787381428bce-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.838185 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.837964 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.838504 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.839076 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1df9108b-7e5b-4dd6-bd7e-787381428bce-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.839198 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.840891 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1df9108b-7e5b-4dd6-bd7e-787381428bce-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.841570 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1df9108b-7e5b-4dd6-bd7e-787381428bce-config-data\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.847727 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.848207 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1df9108b-7e5b-4dd6-bd7e-787381428bce-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: W1129 06:53:36.859331 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9b892d6_7b7d_4259_9dbd_6d0c0d8b12a8.slice/crio-d960731a22a937b28d835a5850d4d039e926a582af834bacc53628877272a038 WatchSource:0}: Error finding container d960731a22a937b28d835a5850d4d039e926a582af834bacc53628877272a038: Status 404 returned error can't find the container with id d960731a22a937b28d835a5850d4d039e926a582af834bacc53628877272a038
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.861684 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.868094 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hflkc\" (UniqueName: \"kubernetes.io/projected/1df9108b-7e5b-4dd6-bd7e-787381428bce-kube-api-access-hflkc\") pod \"rabbitmq-server-0\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " pod="openstack/rabbitmq-server-0"
Nov 29 06:53:36 crc kubenswrapper[4947]: I1129 06:53:36.930346 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 29 06:53:37 crc kubenswrapper[4947]: I1129 06:53:37.233714 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" event={"ID":"0de3454e-aa59-4cbf-855f-3a161faa93eb","Type":"ContainerStarted","Data":"17f65377fb75177c08ac60b7c62a9c1b9487eb46944d20a54cfc460e33f0ab6d"}
Nov 29 06:53:37 crc kubenswrapper[4947]: I1129 06:53:37.250664 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8","Type":"ContainerStarted","Data":"d960731a22a937b28d835a5850d4d039e926a582af834bacc53628877272a038"}
Nov 29 06:53:37 crc kubenswrapper[4947]: I1129 06:53:37.544407 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 29 06:53:37 crc kubenswrapper[4947]: W1129 06:53:37.560348 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1df9108b_7e5b_4dd6_bd7e_787381428bce.slice/crio-7b5d98b30de299d7d151cb0014a2db5c1f871768f887428c580707732d5b07de WatchSource:0}: Error finding container 7b5d98b30de299d7d151cb0014a2db5c1f871768f887428c580707732d5b07de: Status 404 returned error can't find the container with id 7b5d98b30de299d7d151cb0014a2db5c1f871768f887428c580707732d5b07de
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.083347 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.086429 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.094777 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.098701 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.099261 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-85zmr"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.099624 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.100245 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.108078 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.263053 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1df9108b-7e5b-4dd6-bd7e-787381428bce","Type":"ContainerStarted","Data":"7b5d98b30de299d7d151cb0014a2db5c1f871768f887428c580707732d5b07de"}
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.275869 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.275971 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bc968903-97f7-437d-882d-1bb4278dab13-config-data-default\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.276002 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc968903-97f7-437d-882d-1bb4278dab13-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.276037 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc968903-97f7-437d-882d-1bb4278dab13-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.276060 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7zp6\" (UniqueName: \"kubernetes.io/projected/bc968903-97f7-437d-882d-1bb4278dab13-kube-api-access-m7zp6\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.276083 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc968903-97f7-437d-882d-1bb4278dab13-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.276115 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bc968903-97f7-437d-882d-1bb4278dab13-kolla-config\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.276147 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bc968903-97f7-437d-882d-1bb4278dab13-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.378296 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.378398 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bc968903-97f7-437d-882d-1bb4278dab13-config-data-default\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.378437 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc968903-97f7-437d-882d-1bb4278dab13-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.378467 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc968903-97f7-437d-882d-1bb4278dab13-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.378489 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7zp6\" (UniqueName: \"kubernetes.io/projected/bc968903-97f7-437d-882d-1bb4278dab13-kube-api-access-m7zp6\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.378517 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc968903-97f7-437d-882d-1bb4278dab13-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.378536 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bc968903-97f7-437d-882d-1bb4278dab13-kolla-config\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.378563 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bc968903-97f7-437d-882d-1bb4278dab13-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.379565 4947 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.383495 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bc968903-97f7-437d-882d-1bb4278dab13-config-data-default\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.383688 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bc968903-97f7-437d-882d-1bb4278dab13-kolla-config\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.384003 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bc968903-97f7-437d-882d-1bb4278dab13-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.480661 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc968903-97f7-437d-882d-1bb4278dab13-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.481332 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc968903-97f7-437d-882d-1bb4278dab13-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.532166 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc968903-97f7-437d-882d-1bb4278dab13-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.560079 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7zp6\" (UniqueName: \"kubernetes.io/projected/bc968903-97f7-437d-882d-1bb4278dab13-kube-api-access-m7zp6\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.583334 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"bc968903-97f7-437d-882d-1bb4278dab13\") " pod="openstack/openstack-galera-0"
Nov 29 06:53:38 crc kubenswrapper[4947]: I1129 06:53:38.731728 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.613063 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.618948 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.626605 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.626781 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-296ql"
Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.627212 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.627381 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.633160 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.652061 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0"
Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.652613 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0"
Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.652674 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0"
Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.652719 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0"
Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.652755 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0"
Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.652779 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0"
Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.652807 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96vcg\" (UniqueName: \"kubernetes.io/projected/1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f-kube-api-access-96vcg\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0"
Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.653282 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0"
Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.755524 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0"
Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.755583 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0"
Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.755610 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0"
Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.755634 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96vcg\" (UniqueName: \"kubernetes.io/projected/1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f-kube-api-access-96vcg\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0"
Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.755674 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod
\"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.755709 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.755730 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.755759 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.756383 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.757358 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " 
pod="openstack/openstack-cell1-galera-0" Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.758044 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.759433 4947 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.766783 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.769028 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.785980 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96vcg\" (UniqueName: \"kubernetes.io/projected/1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f-kube-api-access-96vcg\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 06:53:39 crc 
kubenswrapper[4947]: I1129 06:53:39.787818 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.826740 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.865072 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.871505 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.874433 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.874686 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.877406 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-p2grx" Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.880189 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 29 06:53:39 crc kubenswrapper[4947]: I1129 06:53:39.960295 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 29 06:53:40 crc kubenswrapper[4947]: I1129 06:53:40.062606 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13265680-d7d8-4091-8dab-29ac0243dc05-combined-ca-bundle\") pod \"memcached-0\" (UID: \"13265680-d7d8-4091-8dab-29ac0243dc05\") " pod="openstack/memcached-0" Nov 29 06:53:40 crc kubenswrapper[4947]: I1129 06:53:40.062656 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/13265680-d7d8-4091-8dab-29ac0243dc05-memcached-tls-certs\") pod \"memcached-0\" (UID: \"13265680-d7d8-4091-8dab-29ac0243dc05\") " pod="openstack/memcached-0" Nov 29 06:53:40 crc kubenswrapper[4947]: I1129 06:53:40.063442 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6wqv\" (UniqueName: \"kubernetes.io/projected/13265680-d7d8-4091-8dab-29ac0243dc05-kube-api-access-g6wqv\") pod \"memcached-0\" (UID: \"13265680-d7d8-4091-8dab-29ac0243dc05\") " pod="openstack/memcached-0" Nov 29 06:53:40 crc kubenswrapper[4947]: I1129 06:53:40.063534 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13265680-d7d8-4091-8dab-29ac0243dc05-config-data\") pod \"memcached-0\" (UID: \"13265680-d7d8-4091-8dab-29ac0243dc05\") " pod="openstack/memcached-0" Nov 29 06:53:40 crc kubenswrapper[4947]: I1129 06:53:40.063575 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/13265680-d7d8-4091-8dab-29ac0243dc05-kolla-config\") pod \"memcached-0\" (UID: \"13265680-d7d8-4091-8dab-29ac0243dc05\") " pod="openstack/memcached-0" Nov 29 06:53:40 crc kubenswrapper[4947]: I1129 
06:53:40.165499 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13265680-d7d8-4091-8dab-29ac0243dc05-combined-ca-bundle\") pod \"memcached-0\" (UID: \"13265680-d7d8-4091-8dab-29ac0243dc05\") " pod="openstack/memcached-0" Nov 29 06:53:40 crc kubenswrapper[4947]: I1129 06:53:40.165558 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/13265680-d7d8-4091-8dab-29ac0243dc05-memcached-tls-certs\") pod \"memcached-0\" (UID: \"13265680-d7d8-4091-8dab-29ac0243dc05\") " pod="openstack/memcached-0" Nov 29 06:53:40 crc kubenswrapper[4947]: I1129 06:53:40.165610 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6wqv\" (UniqueName: \"kubernetes.io/projected/13265680-d7d8-4091-8dab-29ac0243dc05-kube-api-access-g6wqv\") pod \"memcached-0\" (UID: \"13265680-d7d8-4091-8dab-29ac0243dc05\") " pod="openstack/memcached-0" Nov 29 06:53:40 crc kubenswrapper[4947]: I1129 06:53:40.165639 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13265680-d7d8-4091-8dab-29ac0243dc05-config-data\") pod \"memcached-0\" (UID: \"13265680-d7d8-4091-8dab-29ac0243dc05\") " pod="openstack/memcached-0" Nov 29 06:53:40 crc kubenswrapper[4947]: I1129 06:53:40.165675 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/13265680-d7d8-4091-8dab-29ac0243dc05-kolla-config\") pod \"memcached-0\" (UID: \"13265680-d7d8-4091-8dab-29ac0243dc05\") " pod="openstack/memcached-0" Nov 29 06:53:40 crc kubenswrapper[4947]: I1129 06:53:40.166564 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/13265680-d7d8-4091-8dab-29ac0243dc05-kolla-config\") pod 
\"memcached-0\" (UID: \"13265680-d7d8-4091-8dab-29ac0243dc05\") " pod="openstack/memcached-0" Nov 29 06:53:40 crc kubenswrapper[4947]: I1129 06:53:40.167945 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13265680-d7d8-4091-8dab-29ac0243dc05-config-data\") pod \"memcached-0\" (UID: \"13265680-d7d8-4091-8dab-29ac0243dc05\") " pod="openstack/memcached-0" Nov 29 06:53:40 crc kubenswrapper[4947]: I1129 06:53:40.170844 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/13265680-d7d8-4091-8dab-29ac0243dc05-memcached-tls-certs\") pod \"memcached-0\" (UID: \"13265680-d7d8-4091-8dab-29ac0243dc05\") " pod="openstack/memcached-0" Nov 29 06:53:40 crc kubenswrapper[4947]: I1129 06:53:40.181777 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13265680-d7d8-4091-8dab-29ac0243dc05-combined-ca-bundle\") pod \"memcached-0\" (UID: \"13265680-d7d8-4091-8dab-29ac0243dc05\") " pod="openstack/memcached-0" Nov 29 06:53:40 crc kubenswrapper[4947]: I1129 06:53:40.191134 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6wqv\" (UniqueName: \"kubernetes.io/projected/13265680-d7d8-4091-8dab-29ac0243dc05-kube-api-access-g6wqv\") pod \"memcached-0\" (UID: \"13265680-d7d8-4091-8dab-29ac0243dc05\") " pod="openstack/memcached-0" Nov 29 06:53:40 crc kubenswrapper[4947]: I1129 06:53:40.208723 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 29 06:53:41 crc kubenswrapper[4947]: I1129 06:53:41.984718 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 06:53:41 crc kubenswrapper[4947]: I1129 06:53:41.986200 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 29 06:53:41 crc kubenswrapper[4947]: I1129 06:53:41.988738 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-r5thp" Nov 29 06:53:42 crc kubenswrapper[4947]: I1129 06:53:42.015950 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 06:53:42 crc kubenswrapper[4947]: I1129 06:53:42.108566 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwlxg\" (UniqueName: \"kubernetes.io/projected/cc514c2a-183d-404e-ba5d-7641695da78c-kube-api-access-qwlxg\") pod \"kube-state-metrics-0\" (UID: \"cc514c2a-183d-404e-ba5d-7641695da78c\") " pod="openstack/kube-state-metrics-0" Nov 29 06:53:42 crc kubenswrapper[4947]: I1129 06:53:42.209659 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwlxg\" (UniqueName: \"kubernetes.io/projected/cc514c2a-183d-404e-ba5d-7641695da78c-kube-api-access-qwlxg\") pod \"kube-state-metrics-0\" (UID: \"cc514c2a-183d-404e-ba5d-7641695da78c\") " pod="openstack/kube-state-metrics-0" Nov 29 06:53:42 crc kubenswrapper[4947]: I1129 06:53:42.240159 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwlxg\" (UniqueName: \"kubernetes.io/projected/cc514c2a-183d-404e-ba5d-7641695da78c-kube-api-access-qwlxg\") pod \"kube-state-metrics-0\" (UID: \"cc514c2a-183d-404e-ba5d-7641695da78c\") " pod="openstack/kube-state-metrics-0" Nov 29 06:53:42 crc kubenswrapper[4947]: I1129 06:53:42.305807 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.278470 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sn9qf"] Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.280373 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sn9qf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.286581 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.286584 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.287075 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-4cpkb" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.291007 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-ztbsf"] Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.294693 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.311991 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sn9qf"] Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.329192 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-ztbsf"] Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.389584 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc369426-cee0-4a95-aa63-9b8d4df05e7a-var-run-ovn\") pod \"ovn-controller-sn9qf\" (UID: \"fc369426-cee0-4a95-aa63-9b8d4df05e7a\") " pod="openstack/ovn-controller-sn9qf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.389649 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc369426-cee0-4a95-aa63-9b8d4df05e7a-ovn-controller-tls-certs\") pod \"ovn-controller-sn9qf\" (UID: \"fc369426-cee0-4a95-aa63-9b8d4df05e7a\") " pod="openstack/ovn-controller-sn9qf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.389705 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5fd9f589-89f8-44d3-9e3e-17546dc61f7b-var-lib\") pod \"ovn-controller-ovs-ztbsf\" (UID: \"5fd9f589-89f8-44d3-9e3e-17546dc61f7b\") " pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.389757 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5fd9f589-89f8-44d3-9e3e-17546dc61f7b-var-log\") pod \"ovn-controller-ovs-ztbsf\" (UID: \"5fd9f589-89f8-44d3-9e3e-17546dc61f7b\") " pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 
06:53:45.389807 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fc369426-cee0-4a95-aa63-9b8d4df05e7a-var-log-ovn\") pod \"ovn-controller-sn9qf\" (UID: \"fc369426-cee0-4a95-aa63-9b8d4df05e7a\") " pod="openstack/ovn-controller-sn9qf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.389832 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz2tz\" (UniqueName: \"kubernetes.io/projected/fc369426-cee0-4a95-aa63-9b8d4df05e7a-kube-api-access-vz2tz\") pod \"ovn-controller-sn9qf\" (UID: \"fc369426-cee0-4a95-aa63-9b8d4df05e7a\") " pod="openstack/ovn-controller-sn9qf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.389880 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fd9f589-89f8-44d3-9e3e-17546dc61f7b-scripts\") pod \"ovn-controller-ovs-ztbsf\" (UID: \"5fd9f589-89f8-44d3-9e3e-17546dc61f7b\") " pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.389948 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcvcg\" (UniqueName: \"kubernetes.io/projected/5fd9f589-89f8-44d3-9e3e-17546dc61f7b-kube-api-access-tcvcg\") pod \"ovn-controller-ovs-ztbsf\" (UID: \"5fd9f589-89f8-44d3-9e3e-17546dc61f7b\") " pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.389968 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fc369426-cee0-4a95-aa63-9b8d4df05e7a-var-run\") pod \"ovn-controller-sn9qf\" (UID: \"fc369426-cee0-4a95-aa63-9b8d4df05e7a\") " pod="openstack/ovn-controller-sn9qf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.390022 4947 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5fd9f589-89f8-44d3-9e3e-17546dc61f7b-var-run\") pod \"ovn-controller-ovs-ztbsf\" (UID: \"5fd9f589-89f8-44d3-9e3e-17546dc61f7b\") " pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.390047 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5fd9f589-89f8-44d3-9e3e-17546dc61f7b-etc-ovs\") pod \"ovn-controller-ovs-ztbsf\" (UID: \"5fd9f589-89f8-44d3-9e3e-17546dc61f7b\") " pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.390065 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc369426-cee0-4a95-aa63-9b8d4df05e7a-scripts\") pod \"ovn-controller-sn9qf\" (UID: \"fc369426-cee0-4a95-aa63-9b8d4df05e7a\") " pod="openstack/ovn-controller-sn9qf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.390117 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc369426-cee0-4a95-aa63-9b8d4df05e7a-combined-ca-bundle\") pod \"ovn-controller-sn9qf\" (UID: \"fc369426-cee0-4a95-aa63-9b8d4df05e7a\") " pod="openstack/ovn-controller-sn9qf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.495385 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5fd9f589-89f8-44d3-9e3e-17546dc61f7b-var-lib\") pod \"ovn-controller-ovs-ztbsf\" (UID: \"5fd9f589-89f8-44d3-9e3e-17546dc61f7b\") " pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.495479 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-log\" (UniqueName: \"kubernetes.io/host-path/5fd9f589-89f8-44d3-9e3e-17546dc61f7b-var-log\") pod \"ovn-controller-ovs-ztbsf\" (UID: \"5fd9f589-89f8-44d3-9e3e-17546dc61f7b\") " pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.495525 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fc369426-cee0-4a95-aa63-9b8d4df05e7a-var-log-ovn\") pod \"ovn-controller-sn9qf\" (UID: \"fc369426-cee0-4a95-aa63-9b8d4df05e7a\") " pod="openstack/ovn-controller-sn9qf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.495569 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz2tz\" (UniqueName: \"kubernetes.io/projected/fc369426-cee0-4a95-aa63-9b8d4df05e7a-kube-api-access-vz2tz\") pod \"ovn-controller-sn9qf\" (UID: \"fc369426-cee0-4a95-aa63-9b8d4df05e7a\") " pod="openstack/ovn-controller-sn9qf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.495616 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fd9f589-89f8-44d3-9e3e-17546dc61f7b-scripts\") pod \"ovn-controller-ovs-ztbsf\" (UID: \"5fd9f589-89f8-44d3-9e3e-17546dc61f7b\") " pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.495694 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcvcg\" (UniqueName: \"kubernetes.io/projected/5fd9f589-89f8-44d3-9e3e-17546dc61f7b-kube-api-access-tcvcg\") pod \"ovn-controller-ovs-ztbsf\" (UID: \"5fd9f589-89f8-44d3-9e3e-17546dc61f7b\") " pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.495727 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fc369426-cee0-4a95-aa63-9b8d4df05e7a-var-run\") pod 
\"ovn-controller-sn9qf\" (UID: \"fc369426-cee0-4a95-aa63-9b8d4df05e7a\") " pod="openstack/ovn-controller-sn9qf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.495788 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5fd9f589-89f8-44d3-9e3e-17546dc61f7b-var-run\") pod \"ovn-controller-ovs-ztbsf\" (UID: \"5fd9f589-89f8-44d3-9e3e-17546dc61f7b\") " pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.495832 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5fd9f589-89f8-44d3-9e3e-17546dc61f7b-etc-ovs\") pod \"ovn-controller-ovs-ztbsf\" (UID: \"5fd9f589-89f8-44d3-9e3e-17546dc61f7b\") " pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.495862 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc369426-cee0-4a95-aa63-9b8d4df05e7a-scripts\") pod \"ovn-controller-sn9qf\" (UID: \"fc369426-cee0-4a95-aa63-9b8d4df05e7a\") " pod="openstack/ovn-controller-sn9qf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.495904 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc369426-cee0-4a95-aa63-9b8d4df05e7a-combined-ca-bundle\") pod \"ovn-controller-sn9qf\" (UID: \"fc369426-cee0-4a95-aa63-9b8d4df05e7a\") " pod="openstack/ovn-controller-sn9qf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.495978 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc369426-cee0-4a95-aa63-9b8d4df05e7a-ovn-controller-tls-certs\") pod \"ovn-controller-sn9qf\" (UID: \"fc369426-cee0-4a95-aa63-9b8d4df05e7a\") " pod="openstack/ovn-controller-sn9qf" Nov 29 06:53:45 crc 
kubenswrapper[4947]: I1129 06:53:45.496013 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc369426-cee0-4a95-aa63-9b8d4df05e7a-var-run-ovn\") pod \"ovn-controller-sn9qf\" (UID: \"fc369426-cee0-4a95-aa63-9b8d4df05e7a\") " pod="openstack/ovn-controller-sn9qf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.496180 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5fd9f589-89f8-44d3-9e3e-17546dc61f7b-var-lib\") pod \"ovn-controller-ovs-ztbsf\" (UID: \"5fd9f589-89f8-44d3-9e3e-17546dc61f7b\") " pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.496313 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fc369426-cee0-4a95-aa63-9b8d4df05e7a-var-log-ovn\") pod \"ovn-controller-sn9qf\" (UID: \"fc369426-cee0-4a95-aa63-9b8d4df05e7a\") " pod="openstack/ovn-controller-sn9qf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.496496 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5fd9f589-89f8-44d3-9e3e-17546dc61f7b-var-log\") pod \"ovn-controller-ovs-ztbsf\" (UID: \"5fd9f589-89f8-44d3-9e3e-17546dc61f7b\") " pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.496506 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc369426-cee0-4a95-aa63-9b8d4df05e7a-var-run-ovn\") pod \"ovn-controller-sn9qf\" (UID: \"fc369426-cee0-4a95-aa63-9b8d4df05e7a\") " pod="openstack/ovn-controller-sn9qf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.496519 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/5fd9f589-89f8-44d3-9e3e-17546dc61f7b-etc-ovs\") pod \"ovn-controller-ovs-ztbsf\" (UID: \"5fd9f589-89f8-44d3-9e3e-17546dc61f7b\") " pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.496649 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fc369426-cee0-4a95-aa63-9b8d4df05e7a-var-run\") pod \"ovn-controller-sn9qf\" (UID: \"fc369426-cee0-4a95-aa63-9b8d4df05e7a\") " pod="openstack/ovn-controller-sn9qf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.496730 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5fd9f589-89f8-44d3-9e3e-17546dc61f7b-var-run\") pod \"ovn-controller-ovs-ztbsf\" (UID: \"5fd9f589-89f8-44d3-9e3e-17546dc61f7b\") " pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.499194 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc369426-cee0-4a95-aa63-9b8d4df05e7a-scripts\") pod \"ovn-controller-sn9qf\" (UID: \"fc369426-cee0-4a95-aa63-9b8d4df05e7a\") " pod="openstack/ovn-controller-sn9qf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.499835 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fd9f589-89f8-44d3-9e3e-17546dc61f7b-scripts\") pod \"ovn-controller-ovs-ztbsf\" (UID: \"5fd9f589-89f8-44d3-9e3e-17546dc61f7b\") " pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.507401 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc369426-cee0-4a95-aa63-9b8d4df05e7a-ovn-controller-tls-certs\") pod \"ovn-controller-sn9qf\" (UID: \"fc369426-cee0-4a95-aa63-9b8d4df05e7a\") " pod="openstack/ovn-controller-sn9qf" 
Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.507482 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc369426-cee0-4a95-aa63-9b8d4df05e7a-combined-ca-bundle\") pod \"ovn-controller-sn9qf\" (UID: \"fc369426-cee0-4a95-aa63-9b8d4df05e7a\") " pod="openstack/ovn-controller-sn9qf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.529412 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcvcg\" (UniqueName: \"kubernetes.io/projected/5fd9f589-89f8-44d3-9e3e-17546dc61f7b-kube-api-access-tcvcg\") pod \"ovn-controller-ovs-ztbsf\" (UID: \"5fd9f589-89f8-44d3-9e3e-17546dc61f7b\") " pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.532566 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz2tz\" (UniqueName: \"kubernetes.io/projected/fc369426-cee0-4a95-aa63-9b8d4df05e7a-kube-api-access-vz2tz\") pod \"ovn-controller-sn9qf\" (UID: \"fc369426-cee0-4a95-aa63-9b8d4df05e7a\") " pod="openstack/ovn-controller-sn9qf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.619054 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sn9qf" Nov 29 06:53:45 crc kubenswrapper[4947]: I1129 06:53:45.642463 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.028858 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.030644 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.034741 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.034977 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.035130 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.035262 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.035390 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-8xwp6" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.042194 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.168282 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/03658f76-d11a-45a6-a60c-f43b6127225b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.168717 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03658f76-d11a-45a6-a60c-f43b6127225b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.168841 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/03658f76-d11a-45a6-a60c-f43b6127225b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.168886 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/03658f76-d11a-45a6-a60c-f43b6127225b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.168933 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.168969 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/03658f76-d11a-45a6-a60c-f43b6127225b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.169039 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5nkd\" (UniqueName: \"kubernetes.io/projected/03658f76-d11a-45a6-a60c-f43b6127225b-kube-api-access-f5nkd\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.169139 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/03658f76-d11a-45a6-a60c-f43b6127225b-config\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.230163 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.231670 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.238487 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.239406 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.239511 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.243266 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.254754 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-lmcqz" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.270631 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/03658f76-d11a-45a6-a60c-f43b6127225b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.270682 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03658f76-d11a-45a6-a60c-f43b6127225b-combined-ca-bundle\") 
pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.270709 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/03658f76-d11a-45a6-a60c-f43b6127225b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.270739 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/03658f76-d11a-45a6-a60c-f43b6127225b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.270767 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.270793 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/03658f76-d11a-45a6-a60c-f43b6127225b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.270816 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5nkd\" (UniqueName: \"kubernetes.io/projected/03658f76-d11a-45a6-a60c-f43b6127225b-kube-api-access-f5nkd\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.270850 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03658f76-d11a-45a6-a60c-f43b6127225b-config\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.271457 4947 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.271700 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03658f76-d11a-45a6-a60c-f43b6127225b-config\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.273109 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/03658f76-d11a-45a6-a60c-f43b6127225b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.273723 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/03658f76-d11a-45a6-a60c-f43b6127225b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.278735 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03658f76-d11a-45a6-a60c-f43b6127225b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: 
\"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.279681 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/03658f76-d11a-45a6-a60c-f43b6127225b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.281361 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/03658f76-d11a-45a6-a60c-f43b6127225b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.290163 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5nkd\" (UniqueName: \"kubernetes.io/projected/03658f76-d11a-45a6-a60c-f43b6127225b-kube-api-access-f5nkd\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.300382 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"03658f76-d11a-45a6-a60c-f43b6127225b\") " pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.369560 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.372014 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0e8a035-2cb6-413e-8c9a-86535635ae03-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.372164 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.372294 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c0e8a035-2cb6-413e-8c9a-86535635ae03-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.372384 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0e8a035-2cb6-413e-8c9a-86535635ae03-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.372454 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkkk5\" (UniqueName: \"kubernetes.io/projected/c0e8a035-2cb6-413e-8c9a-86535635ae03-kube-api-access-hkkk5\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc 
kubenswrapper[4947]: I1129 06:53:49.372577 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0e8a035-2cb6-413e-8c9a-86535635ae03-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.372702 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e8a035-2cb6-413e-8c9a-86535635ae03-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.372995 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0e8a035-2cb6-413e-8c9a-86535635ae03-config\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.474668 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c0e8a035-2cb6-413e-8c9a-86535635ae03-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.474711 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0e8a035-2cb6-413e-8c9a-86535635ae03-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.474736 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hkkk5\" (UniqueName: \"kubernetes.io/projected/c0e8a035-2cb6-413e-8c9a-86535635ae03-kube-api-access-hkkk5\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.474804 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0e8a035-2cb6-413e-8c9a-86535635ae03-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.474850 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e8a035-2cb6-413e-8c9a-86535635ae03-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.474874 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0e8a035-2cb6-413e-8c9a-86535635ae03-config\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.474891 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0e8a035-2cb6-413e-8c9a-86535635ae03-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.474910 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: 
\"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.475047 4947 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.475414 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c0e8a035-2cb6-413e-8c9a-86535635ae03-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.477011 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0e8a035-2cb6-413e-8c9a-86535635ae03-config\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.477032 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0e8a035-2cb6-413e-8c9a-86535635ae03-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.480019 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e8a035-2cb6-413e-8c9a-86535635ae03-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.480360 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0e8a035-2cb6-413e-8c9a-86535635ae03-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.483789 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0e8a035-2cb6-413e-8c9a-86535635ae03-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.497214 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkkk5\" (UniqueName: \"kubernetes.io/projected/c0e8a035-2cb6-413e-8c9a-86535635ae03-kube-api-access-hkkk5\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.497464 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c0e8a035-2cb6-413e-8c9a-86535635ae03\") " pod="openstack/ovsdbserver-nb-0" Nov 29 06:53:49 crc kubenswrapper[4947]: I1129 06:53:49.560960 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 29 06:54:00 crc kubenswrapper[4947]: I1129 06:54:00.518027 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 29 06:54:01 crc kubenswrapper[4947]: I1129 06:54:01.883904 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-ztbsf"] Nov 29 06:54:02 crc kubenswrapper[4947]: W1129 06:54:02.419252 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f1620f7_4f3f_47bf_995a_f4eef1c9cf0f.slice/crio-653f2797740bc2d3b215b6a77cbfe6d536d47db4c3dd03db9e1ce4aac38ccff1 WatchSource:0}: Error finding container 653f2797740bc2d3b215b6a77cbfe6d536d47db4c3dd03db9e1ce4aac38ccff1: Status 404 returned error can't find the container with id 653f2797740bc2d3b215b6a77cbfe6d536d47db4c3dd03db9e1ce4aac38ccff1 Nov 29 06:54:02 crc kubenswrapper[4947]: E1129 06:54:02.461489 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 29 06:54:02 crc kubenswrapper[4947]: E1129 06:54:02.462035 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxn59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-22brq_openstack(2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:54:02 crc kubenswrapper[4947]: E1129 06:54:02.463384 4947 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-22brq" podUID="2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0" Nov 29 06:54:02 crc kubenswrapper[4947]: E1129 06:54:02.493335 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 29 06:54:02 crc kubenswrapper[4947]: E1129 06:54:02.493555 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d6t4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-jrnft_openstack(0de3454e-aa59-4cbf-855f-3a161faa93eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:54:02 crc kubenswrapper[4947]: E1129 06:54:02.494827 4947 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" podUID="0de3454e-aa59-4cbf-855f-3a161faa93eb" Nov 29 06:54:02 crc kubenswrapper[4947]: I1129 06:54:02.710943 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ztbsf" event={"ID":"5fd9f589-89f8-44d3-9e3e-17546dc61f7b","Type":"ContainerStarted","Data":"218076f10951685258eda5116ca8234ea853a1e84b4b9697169615057d802a2a"} Nov 29 06:54:02 crc kubenswrapper[4947]: I1129 06:54:02.711935 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f","Type":"ContainerStarted","Data":"653f2797740bc2d3b215b6a77cbfe6d536d47db4c3dd03db9e1ce4aac38ccff1"} Nov 29 06:54:02 crc kubenswrapper[4947]: E1129 06:54:02.721425 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" podUID="0de3454e-aa59-4cbf-855f-3a161faa93eb" Nov 29 06:54:02 crc kubenswrapper[4947]: E1129 06:54:02.800612 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 29 06:54:02 crc kubenswrapper[4947]: E1129 06:54:02.800815 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts 
--domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d64vv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-lfrj7_openstack(565b2507-ddbb-4657-b879-082fb62e1284): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:54:02 crc kubenswrapper[4947]: E1129 
06:54:02.802203 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" podUID="565b2507-ddbb-4657-b879-082fb62e1284" Nov 29 06:54:02 crc kubenswrapper[4947]: E1129 06:54:02.807281 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 29 06:54:02 crc kubenswrapper[4947]: E1129 06:54:02.807452 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rsr5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-cz49j_openstack(5a24c29c-f137-4066-8f99-a4714b59e044): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:54:02 crc kubenswrapper[4947]: E1129 06:54:02.814072 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-cz49j" podUID="5a24c29c-f137-4066-8f99-a4714b59e044" Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.145376 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-22brq" Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.309166 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.321614 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxn59\" (UniqueName: \"kubernetes.io/projected/2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0-kube-api-access-vxn59\") pod \"2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0\" (UID: \"2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0\") " Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.321775 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0-config\") pod \"2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0\" (UID: \"2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0\") " Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.321861 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0-dns-svc\") pod \"2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0\" (UID: \"2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0\") " Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.323884 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0-config" (OuterVolumeSpecName: "config") pod "2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0" (UID: "2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.324245 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0" (UID: "2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.332911 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0-kube-api-access-vxn59" (OuterVolumeSpecName: "kube-api-access-vxn59") pod "2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0" (UID: "2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0"). InnerVolumeSpecName "kube-api-access-vxn59". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.410480 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.420799 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.424702 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.424768 4947 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.424784 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxn59\" (UniqueName: 
\"kubernetes.io/projected/2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0-kube-api-access-vxn59\") on node \"crc\" DevicePath \"\"" Nov 29 06:54:03 crc kubenswrapper[4947]: W1129 06:54:03.429947 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc514c2a_183d_404e_ba5d_7641695da78c.slice/crio-852273845a4fdfb4e96f5e5d493b67b2f729c09a6e6965b956777d5e731a83d1 WatchSource:0}: Error finding container 852273845a4fdfb4e96f5e5d493b67b2f729c09a6e6965b956777d5e731a83d1: Status 404 returned error can't find the container with id 852273845a4fdfb4e96f5e5d493b67b2f729c09a6e6965b956777d5e731a83d1 Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.430610 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sn9qf"] Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.515377 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 29 06:54:03 crc kubenswrapper[4947]: W1129 06:54:03.520614 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0e8a035_2cb6_413e_8c9a_86535635ae03.slice/crio-e6918b99ddf3599801e9e8741bbcc37f830302a78e871600f386c7800690379a WatchSource:0}: Error finding container e6918b99ddf3599801e9e8741bbcc37f830302a78e871600f386c7800690379a: Status 404 returned error can't find the container with id e6918b99ddf3599801e9e8741bbcc37f830302a78e871600f386c7800690379a Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.729244 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sn9qf" event={"ID":"fc369426-cee0-4a95-aa63-9b8d4df05e7a","Type":"ContainerStarted","Data":"c26da7da7c3e29f418da4a55295971724b1592ada6658f309206aa014e4ab7b3"} Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.737747 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"13265680-d7d8-4091-8dab-29ac0243dc05","Type":"ContainerStarted","Data":"028e7b9995bb97f31173d9b786eef771b24e3fe46e684889a037bf49271a4b06"} Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.739840 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cc514c2a-183d-404e-ba5d-7641695da78c","Type":"ContainerStarted","Data":"852273845a4fdfb4e96f5e5d493b67b2f729c09a6e6965b956777d5e731a83d1"} Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.741569 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bc968903-97f7-437d-882d-1bb4278dab13","Type":"ContainerStarted","Data":"7bc3bd6e17225a72dee113532061c43fb8a4e0806538b69a2e48eefe675d11c8"} Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.743507 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-22brq" event={"ID":"2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0","Type":"ContainerDied","Data":"025062be4034d66b7d42828a8ab211dddbd999e997a34d1ff4afbd76f8993e14"} Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.743562 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-22brq" Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.745294 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c0e8a035-2cb6-413e-8c9a-86535635ae03","Type":"ContainerStarted","Data":"e6918b99ddf3599801e9e8741bbcc37f830302a78e871600f386c7800690379a"} Nov 29 06:54:03 crc kubenswrapper[4947]: E1129 06:54:03.747988 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" podUID="565b2507-ddbb-4657-b879-082fb62e1284" Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.838687 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-22brq"] Nov 29 06:54:03 crc kubenswrapper[4947]: I1129 06:54:03.847634 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-22brq"] Nov 29 06:54:04 crc kubenswrapper[4947]: I1129 06:54:04.270550 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-cz49j" Nov 29 06:54:04 crc kubenswrapper[4947]: I1129 06:54:04.449739 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a24c29c-f137-4066-8f99-a4714b59e044-config\") pod \"5a24c29c-f137-4066-8f99-a4714b59e044\" (UID: \"5a24c29c-f137-4066-8f99-a4714b59e044\") " Nov 29 06:54:04 crc kubenswrapper[4947]: I1129 06:54:04.449914 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsr5w\" (UniqueName: \"kubernetes.io/projected/5a24c29c-f137-4066-8f99-a4714b59e044-kube-api-access-rsr5w\") pod \"5a24c29c-f137-4066-8f99-a4714b59e044\" (UID: \"5a24c29c-f137-4066-8f99-a4714b59e044\") " Nov 29 06:54:04 crc kubenswrapper[4947]: I1129 06:54:04.451734 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a24c29c-f137-4066-8f99-a4714b59e044-config" (OuterVolumeSpecName: "config") pod "5a24c29c-f137-4066-8f99-a4714b59e044" (UID: "5a24c29c-f137-4066-8f99-a4714b59e044"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:54:04 crc kubenswrapper[4947]: I1129 06:54:04.457294 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a24c29c-f137-4066-8f99-a4714b59e044-kube-api-access-rsr5w" (OuterVolumeSpecName: "kube-api-access-rsr5w") pod "5a24c29c-f137-4066-8f99-a4714b59e044" (UID: "5a24c29c-f137-4066-8f99-a4714b59e044"). InnerVolumeSpecName "kube-api-access-rsr5w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:54:04 crc kubenswrapper[4947]: I1129 06:54:04.553944 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a24c29c-f137-4066-8f99-a4714b59e044-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:54:04 crc kubenswrapper[4947]: I1129 06:54:04.554304 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsr5w\" (UniqueName: \"kubernetes.io/projected/5a24c29c-f137-4066-8f99-a4714b59e044-kube-api-access-rsr5w\") on node \"crc\" DevicePath \"\"" Nov 29 06:54:04 crc kubenswrapper[4947]: I1129 06:54:04.562046 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 29 06:54:04 crc kubenswrapper[4947]: I1129 06:54:04.766947 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8","Type":"ContainerStarted","Data":"715f470ce1721879c0311ceda3a06eeda1c4246dd6ce79c6e137a07f44a5fe37"} Nov 29 06:54:04 crc kubenswrapper[4947]: I1129 06:54:04.768693 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-cz49j" event={"ID":"5a24c29c-f137-4066-8f99-a4714b59e044","Type":"ContainerDied","Data":"549543a89348b89a4fb0ce1135fb4ab4f892f9f16fca8c08ab8201a7bd4aaa38"} Nov 29 06:54:04 crc kubenswrapper[4947]: I1129 06:54:04.768772 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-cz49j" Nov 29 06:54:04 crc kubenswrapper[4947]: I1129 06:54:04.771353 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1df9108b-7e5b-4dd6-bd7e-787381428bce","Type":"ContainerStarted","Data":"c557257ddf99194f29d76a34461fba82b4700930bfc653296e8ab52b9ef25318"} Nov 29 06:54:04 crc kubenswrapper[4947]: I1129 06:54:04.930518 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cz49j"] Nov 29 06:54:04 crc kubenswrapper[4947]: I1129 06:54:04.937674 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cz49j"] Nov 29 06:54:05 crc kubenswrapper[4947]: I1129 06:54:05.189935 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0" path="/var/lib/kubelet/pods/2e8aa085-3c9e-4f38-9fe4-2b16f6d598b0/volumes" Nov 29 06:54:05 crc kubenswrapper[4947]: I1129 06:54:05.190401 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a24c29c-f137-4066-8f99-a4714b59e044" path="/var/lib/kubelet/pods/5a24c29c-f137-4066-8f99-a4714b59e044/volumes" Nov 29 06:54:05 crc kubenswrapper[4947]: I1129 06:54:05.779709 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"03658f76-d11a-45a6-a60c-f43b6127225b","Type":"ContainerStarted","Data":"db0368df29de45c2eec81871bd014251149889964d833a1d338b868597a4d144"} Nov 29 06:54:11 crc kubenswrapper[4947]: I1129 06:54:11.830688 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"03658f76-d11a-45a6-a60c-f43b6127225b","Type":"ContainerStarted","Data":"a35eb066bdabf07abea613431249355b8dd6818e4c53bad6fa334982a0a6ac47"} Nov 29 06:54:11 crc kubenswrapper[4947]: I1129 06:54:11.834385 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"cc514c2a-183d-404e-ba5d-7641695da78c","Type":"ContainerStarted","Data":"1b4e43ccec099e72145483dfd5425f13d8f76c9fab6b04efea0336625cdf037c"} Nov 29 06:54:11 crc kubenswrapper[4947]: I1129 06:54:11.834498 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 29 06:54:11 crc kubenswrapper[4947]: I1129 06:54:11.839705 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bc968903-97f7-437d-882d-1bb4278dab13","Type":"ContainerStarted","Data":"a30c0e9f6280f68ac964a0ec0cf5c6b1681fc6d418953849d87e83d6be46c32e"} Nov 29 06:54:11 crc kubenswrapper[4947]: I1129 06:54:11.843305 4947 generic.go:334] "Generic (PLEG): container finished" podID="5fd9f589-89f8-44d3-9e3e-17546dc61f7b" containerID="510c9f15e68089d3f29eca66f95501832c7e722156675655083544db99b58114" exitCode=0 Nov 29 06:54:11 crc kubenswrapper[4947]: I1129 06:54:11.843412 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ztbsf" event={"ID":"5fd9f589-89f8-44d3-9e3e-17546dc61f7b","Type":"ContainerDied","Data":"510c9f15e68089d3f29eca66f95501832c7e722156675655083544db99b58114"} Nov 29 06:54:11 crc kubenswrapper[4947]: I1129 06:54:11.845741 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c0e8a035-2cb6-413e-8c9a-86535635ae03","Type":"ContainerStarted","Data":"61d0459e23c7257c22a65efb79e3cf805f630e75c0e91237296894b5b724c762"} Nov 29 06:54:11 crc kubenswrapper[4947]: I1129 06:54:11.847538 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sn9qf" event={"ID":"fc369426-cee0-4a95-aa63-9b8d4df05e7a","Type":"ContainerStarted","Data":"df86c8df9630bad0cc88f910e7cd6fc63ea975aaaeac5076c02820d4244f1189"} Nov 29 06:54:11 crc kubenswrapper[4947]: I1129 06:54:11.847687 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-sn9qf" Nov 29 06:54:11 crc 
kubenswrapper[4947]: I1129 06:54:11.849627 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"13265680-d7d8-4091-8dab-29ac0243dc05","Type":"ContainerStarted","Data":"920eaa7cfc124a0c15bef19c79cfda77afd47e35cbfc9b69aae58c07389d3631"} Nov 29 06:54:11 crc kubenswrapper[4947]: I1129 06:54:11.850611 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 29 06:54:11 crc kubenswrapper[4947]: I1129 06:54:11.852251 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f","Type":"ContainerStarted","Data":"ae730fdb7ea3ca9120837552b17b8c92d94406479d9d0c66a59aa8dc80b1f484"} Nov 29 06:54:11 crc kubenswrapper[4947]: I1129 06:54:11.871203 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=23.484495803 podStartE2EDuration="30.871169558s" podCreationTimestamp="2025-11-29 06:53:41 +0000 UTC" firstStartedPulling="2025-11-29 06:54:03.434472942 +0000 UTC m=+1194.478855023" lastFinishedPulling="2025-11-29 06:54:10.821146697 +0000 UTC m=+1201.865528778" observedRunningTime="2025-11-29 06:54:11.860306976 +0000 UTC m=+1202.904689057" watchObservedRunningTime="2025-11-29 06:54:11.871169558 +0000 UTC m=+1202.915551639" Nov 29 06:54:11 crc kubenswrapper[4947]: I1129 06:54:11.934781 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sn9qf" podStartSLOduration=20.198398953 podStartE2EDuration="26.934750929s" podCreationTimestamp="2025-11-29 06:53:45 +0000 UTC" firstStartedPulling="2025-11-29 06:54:03.435090197 +0000 UTC m=+1194.479472278" lastFinishedPulling="2025-11-29 06:54:10.171442173 +0000 UTC m=+1201.215824254" observedRunningTime="2025-11-29 06:54:11.928836881 +0000 UTC m=+1202.973218962" watchObservedRunningTime="2025-11-29 06:54:11.934750929 +0000 UTC m=+1202.979133010" Nov 29 
06:54:11 crc kubenswrapper[4947]: I1129 06:54:11.958514 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=26.279529204 podStartE2EDuration="32.958484273s" podCreationTimestamp="2025-11-29 06:53:39 +0000 UTC" firstStartedPulling="2025-11-29 06:54:03.317612239 +0000 UTC m=+1194.361994320" lastFinishedPulling="2025-11-29 06:54:09.996567298 +0000 UTC m=+1201.040949389" observedRunningTime="2025-11-29 06:54:11.947749994 +0000 UTC m=+1202.992132075" watchObservedRunningTime="2025-11-29 06:54:11.958484273 +0000 UTC m=+1203.002866354" Nov 29 06:54:12 crc kubenswrapper[4947]: I1129 06:54:12.864181 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ztbsf" event={"ID":"5fd9f589-89f8-44d3-9e3e-17546dc61f7b","Type":"ContainerStarted","Data":"7020d420f82e3ac44ec76e9738904b88db361785fe4ba2641d4c44dd76d98b0f"} Nov 29 06:54:12 crc kubenswrapper[4947]: I1129 06:54:12.864826 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ztbsf" event={"ID":"5fd9f589-89f8-44d3-9e3e-17546dc61f7b","Type":"ContainerStarted","Data":"ae1d6597a24cf2715e42c578d1f0537891e04614a844cd310bf98d5b58910347"} Nov 29 06:54:12 crc kubenswrapper[4947]: I1129 06:54:12.864945 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:54:12 crc kubenswrapper[4947]: I1129 06:54:12.865668 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:54:12 crc kubenswrapper[4947]: I1129 06:54:12.895931 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-ztbsf" podStartSLOduration=20.443796369 podStartE2EDuration="27.895901226s" podCreationTimestamp="2025-11-29 06:53:45 +0000 UTC" firstStartedPulling="2025-11-29 06:54:02.719536971 +0000 UTC m=+1193.763919052" lastFinishedPulling="2025-11-29 
06:54:10.171641808 +0000 UTC m=+1201.216023909" observedRunningTime="2025-11-29 06:54:12.887150237 +0000 UTC m=+1203.931532318" watchObservedRunningTime="2025-11-29 06:54:12.895901226 +0000 UTC m=+1203.940283307" Nov 29 06:54:14 crc kubenswrapper[4947]: I1129 06:54:14.881360 4947 generic.go:334] "Generic (PLEG): container finished" podID="bc968903-97f7-437d-882d-1bb4278dab13" containerID="a30c0e9f6280f68ac964a0ec0cf5c6b1681fc6d418953849d87e83d6be46c32e" exitCode=0 Nov 29 06:54:14 crc kubenswrapper[4947]: I1129 06:54:14.881451 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bc968903-97f7-437d-882d-1bb4278dab13","Type":"ContainerDied","Data":"a30c0e9f6280f68ac964a0ec0cf5c6b1681fc6d418953849d87e83d6be46c32e"} Nov 29 06:54:15 crc kubenswrapper[4947]: I1129 06:54:15.211436 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 29 06:54:15 crc kubenswrapper[4947]: I1129 06:54:15.892287 4947 generic.go:334] "Generic (PLEG): container finished" podID="1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f" containerID="ae730fdb7ea3ca9120837552b17b8c92d94406479d9d0c66a59aa8dc80b1f484" exitCode=0 Nov 29 06:54:15 crc kubenswrapper[4947]: I1129 06:54:15.892406 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f","Type":"ContainerDied","Data":"ae730fdb7ea3ca9120837552b17b8c92d94406479d9d0c66a59aa8dc80b1f484"} Nov 29 06:54:15 crc kubenswrapper[4947]: I1129 06:54:15.895467 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"03658f76-d11a-45a6-a60c-f43b6127225b","Type":"ContainerStarted","Data":"68c414a41750c7b21bd15f95f37d250c375c9ff174200911140495a119aef8b2"} Nov 29 06:54:15 crc kubenswrapper[4947]: I1129 06:54:15.898042 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"bc968903-97f7-437d-882d-1bb4278dab13","Type":"ContainerStarted","Data":"977251d314c4e79f9c257a6dda19e948a0ff314d04f902d3665d870f0d4d2b23"} Nov 29 06:54:15 crc kubenswrapper[4947]: I1129 06:54:15.903314 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c0e8a035-2cb6-413e-8c9a-86535635ae03","Type":"ContainerStarted","Data":"5338879a870dc389928092e6457a7334e7315222c21d9bca10d9f9064d23c194"} Nov 29 06:54:15 crc kubenswrapper[4947]: I1129 06:54:15.957159 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=16.767706488 podStartE2EDuration="27.957135534s" podCreationTimestamp="2025-11-29 06:53:48 +0000 UTC" firstStartedPulling="2025-11-29 06:54:03.523623782 +0000 UTC m=+1194.568005863" lastFinishedPulling="2025-11-29 06:54:14.713052828 +0000 UTC m=+1205.757434909" observedRunningTime="2025-11-29 06:54:15.941973835 +0000 UTC m=+1206.986355916" watchObservedRunningTime="2025-11-29 06:54:15.957135534 +0000 UTC m=+1207.001517615" Nov 29 06:54:15 crc kubenswrapper[4947]: I1129 06:54:15.984549 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=32.248885201 podStartE2EDuration="38.984527109s" podCreationTimestamp="2025-11-29 06:53:37 +0000 UTC" firstStartedPulling="2025-11-29 06:54:03.434865872 +0000 UTC m=+1194.479247953" lastFinishedPulling="2025-11-29 06:54:10.17050778 +0000 UTC m=+1201.214889861" observedRunningTime="2025-11-29 06:54:15.983801681 +0000 UTC m=+1207.028183772" watchObservedRunningTime="2025-11-29 06:54:15.984527109 +0000 UTC m=+1207.028909190" Nov 29 06:54:16 crc kubenswrapper[4947]: I1129 06:54:16.007744 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=19.271769949 podStartE2EDuration="29.00772513s" podCreationTimestamp="2025-11-29 06:53:47 +0000 UTC" 
firstStartedPulling="2025-11-29 06:54:04.983937538 +0000 UTC m=+1196.028319619" lastFinishedPulling="2025-11-29 06:54:14.719892719 +0000 UTC m=+1205.764274800" observedRunningTime="2025-11-29 06:54:16.006305604 +0000 UTC m=+1207.050687695" watchObservedRunningTime="2025-11-29 06:54:16.00772513 +0000 UTC m=+1207.052107211" Nov 29 06:54:16 crc kubenswrapper[4947]: I1129 06:54:16.370670 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 29 06:54:16 crc kubenswrapper[4947]: I1129 06:54:16.418793 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 29 06:54:16 crc kubenswrapper[4947]: E1129 06:54:16.507614 4947 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.47:60744->38.102.83.47:34209: write tcp 38.102.83.47:60744->38.102.83.47:34209: write: broken pipe Nov 29 06:54:16 crc kubenswrapper[4947]: I1129 06:54:16.561111 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 29 06:54:16 crc kubenswrapper[4947]: I1129 06:54:16.604182 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 29 06:54:16 crc kubenswrapper[4947]: E1129 06:54:16.740630 4947 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0de3454e_aa59_4cbf_855f_3a161faa93eb.slice/crio-conmon-8bcfa76e2fdec74607f3dadbf8af95bb3daef644b632664282de447308143a7a.scope\": RecentStats: unable to find data in memory cache]" Nov 29 06:54:16 crc kubenswrapper[4947]: I1129 06:54:16.916522 4947 generic.go:334] "Generic (PLEG): container finished" podID="0de3454e-aa59-4cbf-855f-3a161faa93eb" containerID="8bcfa76e2fdec74607f3dadbf8af95bb3daef644b632664282de447308143a7a" exitCode=0 Nov 29 06:54:16 crc kubenswrapper[4947]: I1129 
06:54:16.916681 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" event={"ID":"0de3454e-aa59-4cbf-855f-3a161faa93eb","Type":"ContainerDied","Data":"8bcfa76e2fdec74607f3dadbf8af95bb3daef644b632664282de447308143a7a"} Nov 29 06:54:16 crc kubenswrapper[4947]: I1129 06:54:16.921491 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f","Type":"ContainerStarted","Data":"2e1d145e72bc0d4989a2b2c4067c1ad4f424f02f7f3fe88b91fc74339c7d7dff"} Nov 29 06:54:16 crc kubenswrapper[4947]: I1129 06:54:16.923555 4947 generic.go:334] "Generic (PLEG): container finished" podID="565b2507-ddbb-4657-b879-082fb62e1284" containerID="8fe17f0eb722f5c703070bbe19914ec3b53aff92480bfe0b6a0d5a121a8c801f" exitCode=0 Nov 29 06:54:16 crc kubenswrapper[4947]: I1129 06:54:16.923674 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" event={"ID":"565b2507-ddbb-4657-b879-082fb62e1284","Type":"ContainerDied","Data":"8fe17f0eb722f5c703070bbe19914ec3b53aff92480bfe0b6a0d5a121a8c801f"} Nov 29 06:54:16 crc kubenswrapper[4947]: I1129 06:54:16.924397 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 29 06:54:16 crc kubenswrapper[4947]: I1129 06:54:16.924432 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 29 06:54:16 crc kubenswrapper[4947]: I1129 06:54:16.985646 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 29 06:54:16 crc kubenswrapper[4947]: I1129 06:54:16.992549 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.006208 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" 
podStartSLOduration=31.176189103 podStartE2EDuration="39.00618905s" podCreationTimestamp="2025-11-29 06:53:38 +0000 UTC" firstStartedPulling="2025-11-29 06:54:02.454836923 +0000 UTC m=+1193.499219004" lastFinishedPulling="2025-11-29 06:54:10.28483687 +0000 UTC m=+1201.329218951" observedRunningTime="2025-11-29 06:54:17.000513198 +0000 UTC m=+1208.044895299" watchObservedRunningTime="2025-11-29 06:54:17.00618905 +0000 UTC m=+1208.050571131" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.213263 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lfrj7"] Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.270797 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-v9kgh"] Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.272360 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.277428 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.301379 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-v9kgh"] Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.309308 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-r7njh"] Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.310823 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-r7njh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.313101 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.339470 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-r7njh"] Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.402529 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9ce8dcb-fa34-4847-b6dd-c97af246991c-config\") pod \"dnsmasq-dns-7f896c8c65-v9kgh\" (UID: \"f9ce8dcb-fa34-4847-b6dd-c97af246991c\") " pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.402605 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9ce8dcb-fa34-4847-b6dd-c97af246991c-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-v9kgh\" (UID: \"f9ce8dcb-fa34-4847-b6dd-c97af246991c\") " pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.402630 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e02dafe9-25b6-4837-8469-24d843eeff31-combined-ca-bundle\") pod \"ovn-controller-metrics-r7njh\" (UID: \"e02dafe9-25b6-4837-8469-24d843eeff31\") " pod="openstack/ovn-controller-metrics-r7njh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.402739 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e02dafe9-25b6-4837-8469-24d843eeff31-ovn-rundir\") pod \"ovn-controller-metrics-r7njh\" (UID: \"e02dafe9-25b6-4837-8469-24d843eeff31\") " 
pod="openstack/ovn-controller-metrics-r7njh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.402796 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e02dafe9-25b6-4837-8469-24d843eeff31-ovs-rundir\") pod \"ovn-controller-metrics-r7njh\" (UID: \"e02dafe9-25b6-4837-8469-24d843eeff31\") " pod="openstack/ovn-controller-metrics-r7njh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.402825 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq45x\" (UniqueName: \"kubernetes.io/projected/f9ce8dcb-fa34-4847-b6dd-c97af246991c-kube-api-access-gq45x\") pod \"dnsmasq-dns-7f896c8c65-v9kgh\" (UID: \"f9ce8dcb-fa34-4847-b6dd-c97af246991c\") " pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.402841 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e02dafe9-25b6-4837-8469-24d843eeff31-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-r7njh\" (UID: \"e02dafe9-25b6-4837-8469-24d843eeff31\") " pod="openstack/ovn-controller-metrics-r7njh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.402865 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwxms\" (UniqueName: \"kubernetes.io/projected/e02dafe9-25b6-4837-8469-24d843eeff31-kube-api-access-bwxms\") pod \"ovn-controller-metrics-r7njh\" (UID: \"e02dafe9-25b6-4837-8469-24d843eeff31\") " pod="openstack/ovn-controller-metrics-r7njh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.402893 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9ce8dcb-fa34-4847-b6dd-c97af246991c-ovsdbserver-sb\") pod 
\"dnsmasq-dns-7f896c8c65-v9kgh\" (UID: \"f9ce8dcb-fa34-4847-b6dd-c97af246991c\") " pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.402910 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e02dafe9-25b6-4837-8469-24d843eeff31-config\") pod \"ovn-controller-metrics-r7njh\" (UID: \"e02dafe9-25b6-4837-8469-24d843eeff31\") " pod="openstack/ovn-controller-metrics-r7njh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.419662 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jrnft"] Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.462285 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-s8tfs"] Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.463951 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.473329 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.478867 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.480732 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.483536 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.483727 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-lhwtf" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.483913 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.484365 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.490681 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.498516 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-s8tfs"] Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.510127 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq45x\" (UniqueName: \"kubernetes.io/projected/f9ce8dcb-fa34-4847-b6dd-c97af246991c-kube-api-access-gq45x\") pod \"dnsmasq-dns-7f896c8c65-v9kgh\" (UID: \"f9ce8dcb-fa34-4847-b6dd-c97af246991c\") " pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.510177 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e02dafe9-25b6-4837-8469-24d843eeff31-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-r7njh\" (UID: \"e02dafe9-25b6-4837-8469-24d843eeff31\") " pod="openstack/ovn-controller-metrics-r7njh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.510203 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bwxms\" (UniqueName: \"kubernetes.io/projected/e02dafe9-25b6-4837-8469-24d843eeff31-kube-api-access-bwxms\") pod \"ovn-controller-metrics-r7njh\" (UID: \"e02dafe9-25b6-4837-8469-24d843eeff31\") " pod="openstack/ovn-controller-metrics-r7njh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.510250 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9ce8dcb-fa34-4847-b6dd-c97af246991c-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-v9kgh\" (UID: \"f9ce8dcb-fa34-4847-b6dd-c97af246991c\") " pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.510274 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e02dafe9-25b6-4837-8469-24d843eeff31-config\") pod \"ovn-controller-metrics-r7njh\" (UID: \"e02dafe9-25b6-4837-8469-24d843eeff31\") " pod="openstack/ovn-controller-metrics-r7njh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.510330 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9ce8dcb-fa34-4847-b6dd-c97af246991c-config\") pod \"dnsmasq-dns-7f896c8c65-v9kgh\" (UID: \"f9ce8dcb-fa34-4847-b6dd-c97af246991c\") " pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.510351 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9ce8dcb-fa34-4847-b6dd-c97af246991c-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-v9kgh\" (UID: \"f9ce8dcb-fa34-4847-b6dd-c97af246991c\") " pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.510371 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e02dafe9-25b6-4837-8469-24d843eeff31-combined-ca-bundle\") pod \"ovn-controller-metrics-r7njh\" (UID: \"e02dafe9-25b6-4837-8469-24d843eeff31\") " pod="openstack/ovn-controller-metrics-r7njh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.510409 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e02dafe9-25b6-4837-8469-24d843eeff31-ovn-rundir\") pod \"ovn-controller-metrics-r7njh\" (UID: \"e02dafe9-25b6-4837-8469-24d843eeff31\") " pod="openstack/ovn-controller-metrics-r7njh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.510452 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e02dafe9-25b6-4837-8469-24d843eeff31-ovs-rundir\") pod \"ovn-controller-metrics-r7njh\" (UID: \"e02dafe9-25b6-4837-8469-24d843eeff31\") " pod="openstack/ovn-controller-metrics-r7njh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.510765 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e02dafe9-25b6-4837-8469-24d843eeff31-ovs-rundir\") pod \"ovn-controller-metrics-r7njh\" (UID: \"e02dafe9-25b6-4837-8469-24d843eeff31\") " pod="openstack/ovn-controller-metrics-r7njh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.511526 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e02dafe9-25b6-4837-8469-24d843eeff31-config\") pod \"ovn-controller-metrics-r7njh\" (UID: \"e02dafe9-25b6-4837-8469-24d843eeff31\") " pod="openstack/ovn-controller-metrics-r7njh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.511659 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9ce8dcb-fa34-4847-b6dd-c97af246991c-config\") pod \"dnsmasq-dns-7f896c8c65-v9kgh\" (UID: 
\"f9ce8dcb-fa34-4847-b6dd-c97af246991c\") " pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.511716 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e02dafe9-25b6-4837-8469-24d843eeff31-ovn-rundir\") pod \"ovn-controller-metrics-r7njh\" (UID: \"e02dafe9-25b6-4837-8469-24d843eeff31\") " pod="openstack/ovn-controller-metrics-r7njh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.512283 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9ce8dcb-fa34-4847-b6dd-c97af246991c-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-v9kgh\" (UID: \"f9ce8dcb-fa34-4847-b6dd-c97af246991c\") " pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.513077 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9ce8dcb-fa34-4847-b6dd-c97af246991c-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-v9kgh\" (UID: \"f9ce8dcb-fa34-4847-b6dd-c97af246991c\") " pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.520120 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e02dafe9-25b6-4837-8469-24d843eeff31-combined-ca-bundle\") pod \"ovn-controller-metrics-r7njh\" (UID: \"e02dafe9-25b6-4837-8469-24d843eeff31\") " pod="openstack/ovn-controller-metrics-r7njh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.522851 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e02dafe9-25b6-4837-8469-24d843eeff31-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-r7njh\" (UID: \"e02dafe9-25b6-4837-8469-24d843eeff31\") " pod="openstack/ovn-controller-metrics-r7njh" Nov 29 
06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.547259 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwxms\" (UniqueName: \"kubernetes.io/projected/e02dafe9-25b6-4837-8469-24d843eeff31-kube-api-access-bwxms\") pod \"ovn-controller-metrics-r7njh\" (UID: \"e02dafe9-25b6-4837-8469-24d843eeff31\") " pod="openstack/ovn-controller-metrics-r7njh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.549698 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq45x\" (UniqueName: \"kubernetes.io/projected/f9ce8dcb-fa34-4847-b6dd-c97af246991c-kube-api-access-gq45x\") pod \"dnsmasq-dns-7f896c8c65-v9kgh\" (UID: \"f9ce8dcb-fa34-4847-b6dd-c97af246991c\") " pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.601117 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.616782 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/869b4713-f63d-4d82-aa68-e1addc0ae4eb-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"869b4713-f63d-4d82-aa68-e1addc0ae4eb\") " pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.616853 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/869b4713-f63d-4d82-aa68-e1addc0ae4eb-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"869b4713-f63d-4d82-aa68-e1addc0ae4eb\") " pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.616886 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/869b4713-f63d-4d82-aa68-e1addc0ae4eb-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"869b4713-f63d-4d82-aa68-e1addc0ae4eb\") " pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.616921 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-config\") pod \"dnsmasq-dns-86db49b7ff-s8tfs\" (UID: \"415d7008-edae-402c-8138-2c069385d502\") " pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.616950 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-s8tfs\" (UID: \"415d7008-edae-402c-8138-2c069385d502\") " pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.616979 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/869b4713-f63d-4d82-aa68-e1addc0ae4eb-config\") pod \"ovn-northd-0\" (UID: \"869b4713-f63d-4d82-aa68-e1addc0ae4eb\") " pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.617021 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-s8tfs\" (UID: \"415d7008-edae-402c-8138-2c069385d502\") " pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.617045 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7qmz\" (UniqueName: 
\"kubernetes.io/projected/869b4713-f63d-4d82-aa68-e1addc0ae4eb-kube-api-access-g7qmz\") pod \"ovn-northd-0\" (UID: \"869b4713-f63d-4d82-aa68-e1addc0ae4eb\") " pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.617074 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/869b4713-f63d-4d82-aa68-e1addc0ae4eb-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"869b4713-f63d-4d82-aa68-e1addc0ae4eb\") " pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.617103 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/869b4713-f63d-4d82-aa68-e1addc0ae4eb-scripts\") pod \"ovn-northd-0\" (UID: \"869b4713-f63d-4d82-aa68-e1addc0ae4eb\") " pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.617137 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cv5n\" (UniqueName: \"kubernetes.io/projected/415d7008-edae-402c-8138-2c069385d502-kube-api-access-2cv5n\") pod \"dnsmasq-dns-86db49b7ff-s8tfs\" (UID: \"415d7008-edae-402c-8138-2c069385d502\") " pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.617164 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-s8tfs\" (UID: \"415d7008-edae-402c-8138-2c069385d502\") " pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.632914 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-r7njh" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.719011 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-s8tfs\" (UID: \"415d7008-edae-402c-8138-2c069385d502\") " pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.719097 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/869b4713-f63d-4d82-aa68-e1addc0ae4eb-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"869b4713-f63d-4d82-aa68-e1addc0ae4eb\") " pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.719118 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/869b4713-f63d-4d82-aa68-e1addc0ae4eb-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"869b4713-f63d-4d82-aa68-e1addc0ae4eb\") " pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.719143 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869b4713-f63d-4d82-aa68-e1addc0ae4eb-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"869b4713-f63d-4d82-aa68-e1addc0ae4eb\") " pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.719169 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-config\") pod \"dnsmasq-dns-86db49b7ff-s8tfs\" (UID: \"415d7008-edae-402c-8138-2c069385d502\") " pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.719188 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-s8tfs\" (UID: \"415d7008-edae-402c-8138-2c069385d502\") " pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.719209 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/869b4713-f63d-4d82-aa68-e1addc0ae4eb-config\") pod \"ovn-northd-0\" (UID: \"869b4713-f63d-4d82-aa68-e1addc0ae4eb\") " pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.719302 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-s8tfs\" (UID: \"415d7008-edae-402c-8138-2c069385d502\") " pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.719364 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7qmz\" (UniqueName: \"kubernetes.io/projected/869b4713-f63d-4d82-aa68-e1addc0ae4eb-kube-api-access-g7qmz\") pod \"ovn-northd-0\" (UID: \"869b4713-f63d-4d82-aa68-e1addc0ae4eb\") " pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.719387 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/869b4713-f63d-4d82-aa68-e1addc0ae4eb-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"869b4713-f63d-4d82-aa68-e1addc0ae4eb\") " pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.719412 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/869b4713-f63d-4d82-aa68-e1addc0ae4eb-scripts\") pod \"ovn-northd-0\" (UID: \"869b4713-f63d-4d82-aa68-e1addc0ae4eb\") " pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.719439 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cv5n\" (UniqueName: \"kubernetes.io/projected/415d7008-edae-402c-8138-2c069385d502-kube-api-access-2cv5n\") pod \"dnsmasq-dns-86db49b7ff-s8tfs\" (UID: \"415d7008-edae-402c-8138-2c069385d502\") " pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.720054 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/869b4713-f63d-4d82-aa68-e1addc0ae4eb-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"869b4713-f63d-4d82-aa68-e1addc0ae4eb\") " pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.721048 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-s8tfs\" (UID: \"415d7008-edae-402c-8138-2c069385d502\") " pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.726709 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/869b4713-f63d-4d82-aa68-e1addc0ae4eb-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"869b4713-f63d-4d82-aa68-e1addc0ae4eb\") " pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.727800 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-s8tfs\" (UID: \"415d7008-edae-402c-8138-2c069385d502\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.728527 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-config\") pod \"dnsmasq-dns-86db49b7ff-s8tfs\" (UID: \"415d7008-edae-402c-8138-2c069385d502\") " pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.729174 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-s8tfs\" (UID: \"415d7008-edae-402c-8138-2c069385d502\") " pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.729906 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/869b4713-f63d-4d82-aa68-e1addc0ae4eb-config\") pod \"ovn-northd-0\" (UID: \"869b4713-f63d-4d82-aa68-e1addc0ae4eb\") " pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.730135 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869b4713-f63d-4d82-aa68-e1addc0ae4eb-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"869b4713-f63d-4d82-aa68-e1addc0ae4eb\") " pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.730799 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/869b4713-f63d-4d82-aa68-e1addc0ae4eb-scripts\") pod \"ovn-northd-0\" (UID: \"869b4713-f63d-4d82-aa68-e1addc0ae4eb\") " pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.735910 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/869b4713-f63d-4d82-aa68-e1addc0ae4eb-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"869b4713-f63d-4d82-aa68-e1addc0ae4eb\") " pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.754862 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cv5n\" (UniqueName: \"kubernetes.io/projected/415d7008-edae-402c-8138-2c069385d502-kube-api-access-2cv5n\") pod \"dnsmasq-dns-86db49b7ff-s8tfs\" (UID: \"415d7008-edae-402c-8138-2c069385d502\") " pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.756770 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7qmz\" (UniqueName: \"kubernetes.io/projected/869b4713-f63d-4d82-aa68-e1addc0ae4eb-kube-api-access-g7qmz\") pod \"ovn-northd-0\" (UID: \"869b4713-f63d-4d82-aa68-e1addc0ae4eb\") " pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.788815 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.821352 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.946152 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" event={"ID":"565b2507-ddbb-4657-b879-082fb62e1284","Type":"ContainerStarted","Data":"0582822cff1cfd5295468baceaab8f980f14b9fb71f7d8aabb7cb76afb56e046"} Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.946317 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" podUID="565b2507-ddbb-4657-b879-082fb62e1284" containerName="dnsmasq-dns" containerID="cri-o://0582822cff1cfd5295468baceaab8f980f14b9fb71f7d8aabb7cb76afb56e046" gracePeriod=10 Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.946731 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.953151 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" event={"ID":"0de3454e-aa59-4cbf-855f-3a161faa93eb","Type":"ContainerStarted","Data":"c816df18d8b41e84d9a2ba8451c5b02efbe63dc1b96781141ba4183ee8f6fb24"} Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.961692 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" podUID="0de3454e-aa59-4cbf-855f-3a161faa93eb" containerName="dnsmasq-dns" containerID="cri-o://c816df18d8b41e84d9a2ba8451c5b02efbe63dc1b96781141ba4183ee8f6fb24" gracePeriod=10 Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.961909 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" Nov 29 06:54:17 crc kubenswrapper[4947]: I1129 06:54:17.979516 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" podStartSLOduration=4.228748057 
podStartE2EDuration="43.979498251s" podCreationTimestamp="2025-11-29 06:53:34 +0000 UTC" firstStartedPulling="2025-11-29 06:53:35.960929569 +0000 UTC m=+1167.005311650" lastFinishedPulling="2025-11-29 06:54:15.711679763 +0000 UTC m=+1206.756061844" observedRunningTime="2025-11-29 06:54:17.969761817 +0000 UTC m=+1209.014143898" watchObservedRunningTime="2025-11-29 06:54:17.979498251 +0000 UTC m=+1209.023880342" Nov 29 06:54:18 crc kubenswrapper[4947]: I1129 06:54:18.003850 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" podStartSLOduration=-9223371993.851007 podStartE2EDuration="43.003768218s" podCreationTimestamp="2025-11-29 06:53:35 +0000 UTC" firstStartedPulling="2025-11-29 06:53:36.265341195 +0000 UTC m=+1167.309723276" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:54:17.99426244 +0000 UTC m=+1209.038644521" watchObservedRunningTime="2025-11-29 06:54:18.003768218 +0000 UTC m=+1209.048150299" Nov 29 06:54:18 crc kubenswrapper[4947]: I1129 06:54:18.128833 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-r7njh"] Nov 29 06:54:18 crc kubenswrapper[4947]: W1129 06:54:18.133784 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode02dafe9_25b6_4837_8469_24d843eeff31.slice/crio-fa0030045457768001f5564e1444b9d4a72069b13c85d5c4fe36edc1e69e6ad9 WatchSource:0}: Error finding container fa0030045457768001f5564e1444b9d4a72069b13c85d5c4fe36edc1e69e6ad9: Status 404 returned error can't find the container with id fa0030045457768001f5564e1444b9d4a72069b13c85d5c4fe36edc1e69e6ad9 Nov 29 06:54:18 crc kubenswrapper[4947]: I1129 06:54:18.234631 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-v9kgh"] Nov 29 06:54:18 crc kubenswrapper[4947]: I1129 06:54:18.345713 4947 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/ovn-northd-0"] Nov 29 06:54:18 crc kubenswrapper[4947]: I1129 06:54:18.412125 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-s8tfs"] Nov 29 06:54:18 crc kubenswrapper[4947]: W1129 06:54:18.414449 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod415d7008_edae_402c_8138_2c069385d502.slice/crio-f475eb0bb03b7b1dd1276756993d4c6346c426f173ee18e0ab21a9339c340323 WatchSource:0}: Error finding container f475eb0bb03b7b1dd1276756993d4c6346c426f173ee18e0ab21a9339c340323: Status 404 returned error can't find the container with id f475eb0bb03b7b1dd1276756993d4c6346c426f173ee18e0ab21a9339c340323 Nov 29 06:54:18 crc kubenswrapper[4947]: I1129 06:54:18.731782 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 29 06:54:18 crc kubenswrapper[4947]: I1129 06:54:18.731846 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 29 06:54:18 crc kubenswrapper[4947]: I1129 06:54:18.962046 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" event={"ID":"415d7008-edae-402c-8138-2c069385d502","Type":"ContainerStarted","Data":"9567132af4e2a6dc3d88eab753d4bab2c217aba48ead8063efa76404ef6d58ad"} Nov 29 06:54:18 crc kubenswrapper[4947]: I1129 06:54:18.962089 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" event={"ID":"415d7008-edae-402c-8138-2c069385d502","Type":"ContainerStarted","Data":"f475eb0bb03b7b1dd1276756993d4c6346c426f173ee18e0ab21a9339c340323"} Nov 29 06:54:18 crc kubenswrapper[4947]: I1129 06:54:18.963546 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"869b4713-f63d-4d82-aa68-e1addc0ae4eb","Type":"ContainerStarted","Data":"e78fdfdad9a3197fc38df948c71f21c5ad0c2a9ae9b2ae76fc22ca33f9ef51fa"} Nov 29 06:54:18 crc kubenswrapper[4947]: I1129 06:54:18.965167 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" event={"ID":"f9ce8dcb-fa34-4847-b6dd-c97af246991c","Type":"ContainerStarted","Data":"e1d213d3c8e6563e5f1b2720185df493c3ab0b066c5ec82e32099a8c86fb1cf7"} Nov 29 06:54:18 crc kubenswrapper[4947]: I1129 06:54:18.965196 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" event={"ID":"f9ce8dcb-fa34-4847-b6dd-c97af246991c","Type":"ContainerStarted","Data":"8ed4ec2bd9d67adbed92cc001ff4ede6c64d37f7895023a852800e2613972f41"} Nov 29 06:54:18 crc kubenswrapper[4947]: I1129 06:54:18.967379 4947 generic.go:334] "Generic (PLEG): container finished" podID="565b2507-ddbb-4657-b879-082fb62e1284" containerID="0582822cff1cfd5295468baceaab8f980f14b9fb71f7d8aabb7cb76afb56e046" exitCode=0 Nov 29 06:54:18 crc kubenswrapper[4947]: I1129 06:54:18.967442 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" event={"ID":"565b2507-ddbb-4657-b879-082fb62e1284","Type":"ContainerDied","Data":"0582822cff1cfd5295468baceaab8f980f14b9fb71f7d8aabb7cb76afb56e046"} Nov 29 06:54:18 crc kubenswrapper[4947]: I1129 06:54:18.969098 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-r7njh" event={"ID":"e02dafe9-25b6-4837-8469-24d843eeff31","Type":"ContainerStarted","Data":"1ba969ca57b1fa80c4965d4cdbfd02dbf122cc4e2d1f3795aeb0a3dd78ee96eb"} Nov 29 06:54:18 crc kubenswrapper[4947]: I1129 06:54:18.969127 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-r7njh" event={"ID":"e02dafe9-25b6-4837-8469-24d843eeff31","Type":"ContainerStarted","Data":"fa0030045457768001f5564e1444b9d4a72069b13c85d5c4fe36edc1e69e6ad9"} Nov 29 06:54:18 crc 
kubenswrapper[4947]: I1129 06:54:18.970948 4947 generic.go:334] "Generic (PLEG): container finished" podID="0de3454e-aa59-4cbf-855f-3a161faa93eb" containerID="c816df18d8b41e84d9a2ba8451c5b02efbe63dc1b96781141ba4183ee8f6fb24" exitCode=0 Nov 29 06:54:18 crc kubenswrapper[4947]: I1129 06:54:18.970982 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" event={"ID":"0de3454e-aa59-4cbf-855f-3a161faa93eb","Type":"ContainerDied","Data":"c816df18d8b41e84d9a2ba8451c5b02efbe63dc1b96781141ba4183ee8f6fb24"} Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.433074 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.569148 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/565b2507-ddbb-4657-b879-082fb62e1284-dns-svc\") pod \"565b2507-ddbb-4657-b879-082fb62e1284\" (UID: \"565b2507-ddbb-4657-b879-082fb62e1284\") " Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.569236 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d64vv\" (UniqueName: \"kubernetes.io/projected/565b2507-ddbb-4657-b879-082fb62e1284-kube-api-access-d64vv\") pod \"565b2507-ddbb-4657-b879-082fb62e1284\" (UID: \"565b2507-ddbb-4657-b879-082fb62e1284\") " Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.569311 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/565b2507-ddbb-4657-b879-082fb62e1284-config\") pod \"565b2507-ddbb-4657-b879-082fb62e1284\" (UID: \"565b2507-ddbb-4657-b879-082fb62e1284\") " Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.577621 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/565b2507-ddbb-4657-b879-082fb62e1284-kube-api-access-d64vv" (OuterVolumeSpecName: "kube-api-access-d64vv") pod "565b2507-ddbb-4657-b879-082fb62e1284" (UID: "565b2507-ddbb-4657-b879-082fb62e1284"). InnerVolumeSpecName "kube-api-access-d64vv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.619086 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/565b2507-ddbb-4657-b879-082fb62e1284-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "565b2507-ddbb-4657-b879-082fb62e1284" (UID: "565b2507-ddbb-4657-b879-082fb62e1284"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.624031 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/565b2507-ddbb-4657-b879-082fb62e1284-config" (OuterVolumeSpecName: "config") pod "565b2507-ddbb-4657-b879-082fb62e1284" (UID: "565b2507-ddbb-4657-b879-082fb62e1284"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.633007 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.672095 4947 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/565b2507-ddbb-4657-b879-082fb62e1284-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.672160 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d64vv\" (UniqueName: \"kubernetes.io/projected/565b2507-ddbb-4657-b879-082fb62e1284-kube-api-access-d64vv\") on node \"crc\" DevicePath \"\"" Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.672176 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/565b2507-ddbb-4657-b879-082fb62e1284-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.772988 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6t4k\" (UniqueName: \"kubernetes.io/projected/0de3454e-aa59-4cbf-855f-3a161faa93eb-kube-api-access-d6t4k\") pod \"0de3454e-aa59-4cbf-855f-3a161faa93eb\" (UID: \"0de3454e-aa59-4cbf-855f-3a161faa93eb\") " Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.773093 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0de3454e-aa59-4cbf-855f-3a161faa93eb-dns-svc\") pod \"0de3454e-aa59-4cbf-855f-3a161faa93eb\" (UID: \"0de3454e-aa59-4cbf-855f-3a161faa93eb\") " Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.773115 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0de3454e-aa59-4cbf-855f-3a161faa93eb-config\") pod \"0de3454e-aa59-4cbf-855f-3a161faa93eb\" (UID: \"0de3454e-aa59-4cbf-855f-3a161faa93eb\") " Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.778538 4947 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0de3454e-aa59-4cbf-855f-3a161faa93eb-kube-api-access-d6t4k" (OuterVolumeSpecName: "kube-api-access-d6t4k") pod "0de3454e-aa59-4cbf-855f-3a161faa93eb" (UID: "0de3454e-aa59-4cbf-855f-3a161faa93eb"). InnerVolumeSpecName "kube-api-access-d6t4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.815583 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0de3454e-aa59-4cbf-855f-3a161faa93eb-config" (OuterVolumeSpecName: "config") pod "0de3454e-aa59-4cbf-855f-3a161faa93eb" (UID: "0de3454e-aa59-4cbf-855f-3a161faa93eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.820298 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0de3454e-aa59-4cbf-855f-3a161faa93eb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0de3454e-aa59-4cbf-855f-3a161faa93eb" (UID: "0de3454e-aa59-4cbf-855f-3a161faa93eb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.876418 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6t4k\" (UniqueName: \"kubernetes.io/projected/0de3454e-aa59-4cbf-855f-3a161faa93eb-kube-api-access-d6t4k\") on node \"crc\" DevicePath \"\"" Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.877082 4947 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0de3454e-aa59-4cbf-855f-3a161faa93eb-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.877180 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0de3454e-aa59-4cbf-855f-3a161faa93eb-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.962161 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.963824 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.986060 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" event={"ID":"0de3454e-aa59-4cbf-855f-3a161faa93eb","Type":"ContainerDied","Data":"17f65377fb75177c08ac60b7c62a9c1b9487eb46944d20a54cfc460e33f0ab6d"} Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.986502 4947 scope.go:117] "RemoveContainer" containerID="c816df18d8b41e84d9a2ba8451c5b02efbe63dc1b96781141ba4183ee8f6fb24" Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.986558 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jrnft" Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.991156 4947 generic.go:334] "Generic (PLEG): container finished" podID="415d7008-edae-402c-8138-2c069385d502" containerID="9567132af4e2a6dc3d88eab753d4bab2c217aba48ead8063efa76404ef6d58ad" exitCode=0 Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.991313 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" event={"ID":"415d7008-edae-402c-8138-2c069385d502","Type":"ContainerDied","Data":"9567132af4e2a6dc3d88eab753d4bab2c217aba48ead8063efa76404ef6d58ad"} Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.994185 4947 generic.go:334] "Generic (PLEG): container finished" podID="f9ce8dcb-fa34-4847-b6dd-c97af246991c" containerID="e1d213d3c8e6563e5f1b2720185df493c3ab0b066c5ec82e32099a8c86fb1cf7" exitCode=0 Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.994306 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" event={"ID":"f9ce8dcb-fa34-4847-b6dd-c97af246991c","Type":"ContainerDied","Data":"e1d213d3c8e6563e5f1b2720185df493c3ab0b066c5ec82e32099a8c86fb1cf7"} Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.998992 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" Nov 29 06:54:19 crc kubenswrapper[4947]: I1129 06:54:19.998970 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lfrj7" event={"ID":"565b2507-ddbb-4657-b879-082fb62e1284","Type":"ContainerDied","Data":"1a252ac3550346ec54d4e105b1025e327e0b56a112aff008fb30f415ea8f879c"} Nov 29 06:54:20 crc kubenswrapper[4947]: I1129 06:54:20.083366 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-r7njh" podStartSLOduration=3.083331117 podStartE2EDuration="3.083331117s" podCreationTimestamp="2025-11-29 06:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:54:20.07505169 +0000 UTC m=+1211.119433771" watchObservedRunningTime="2025-11-29 06:54:20.083331117 +0000 UTC m=+1211.127713198" Nov 29 06:54:20 crc kubenswrapper[4947]: I1129 06:54:20.105424 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lfrj7"] Nov 29 06:54:20 crc kubenswrapper[4947]: I1129 06:54:20.114565 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lfrj7"] Nov 29 06:54:20 crc kubenswrapper[4947]: I1129 06:54:20.131525 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jrnft"] Nov 29 06:54:20 crc kubenswrapper[4947]: I1129 06:54:20.140634 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jrnft"] Nov 29 06:54:20 crc kubenswrapper[4947]: I1129 06:54:20.554595 4947 scope.go:117] "RemoveContainer" containerID="8bcfa76e2fdec74607f3dadbf8af95bb3daef644b632664282de447308143a7a" Nov 29 06:54:20 crc kubenswrapper[4947]: I1129 06:54:20.699116 4947 scope.go:117] "RemoveContainer" containerID="0582822cff1cfd5295468baceaab8f980f14b9fb71f7d8aabb7cb76afb56e046" Nov 29 06:54:20 crc 
kubenswrapper[4947]: I1129 06:54:20.779916 4947 scope.go:117] "RemoveContainer" containerID="8fe17f0eb722f5c703070bbe19914ec3b53aff92480bfe0b6a0d5a121a8c801f" Nov 29 06:54:21 crc kubenswrapper[4947]: I1129 06:54:21.190135 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0de3454e-aa59-4cbf-855f-3a161faa93eb" path="/var/lib/kubelet/pods/0de3454e-aa59-4cbf-855f-3a161faa93eb/volumes" Nov 29 06:54:21 crc kubenswrapper[4947]: I1129 06:54:21.190903 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="565b2507-ddbb-4657-b879-082fb62e1284" path="/var/lib/kubelet/pods/565b2507-ddbb-4657-b879-082fb62e1284/volumes" Nov 29 06:54:22 crc kubenswrapper[4947]: I1129 06:54:22.045812 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" event={"ID":"415d7008-edae-402c-8138-2c069385d502","Type":"ContainerStarted","Data":"702c25b9fc0f8fb61bbd2b4c53eadd1a17083f4fa927624a756f734a26cb553d"} Nov 29 06:54:22 crc kubenswrapper[4947]: I1129 06:54:22.047291 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" Nov 29 06:54:22 crc kubenswrapper[4947]: I1129 06:54:22.049293 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" event={"ID":"f9ce8dcb-fa34-4847-b6dd-c97af246991c","Type":"ContainerStarted","Data":"c405b0248c4ec4832a7594f3ce0e7b7ccce52a442dfcb441f054ae23559f0606"} Nov 29 06:54:22 crc kubenswrapper[4947]: I1129 06:54:22.049893 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" Nov 29 06:54:22 crc kubenswrapper[4947]: I1129 06:54:22.074422 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" podStartSLOduration=5.074405141 podStartE2EDuration="5.074405141s" podCreationTimestamp="2025-11-29 06:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:54:22.065439026 +0000 UTC m=+1213.109821107" watchObservedRunningTime="2025-11-29 06:54:22.074405141 +0000 UTC m=+1213.118787222" Nov 29 06:54:22 crc kubenswrapper[4947]: I1129 06:54:22.339768 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 29 06:54:22 crc kubenswrapper[4947]: I1129 06:54:22.359442 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" podStartSLOduration=5.359420561 podStartE2EDuration="5.359420561s" podCreationTimestamp="2025-11-29 06:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:54:22.091763205 +0000 UTC m=+1213.136145286" watchObservedRunningTime="2025-11-29 06:54:22.359420561 +0000 UTC m=+1213.403802642" Nov 29 06:54:22 crc kubenswrapper[4947]: I1129 06:54:22.987680 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 06:54:22 crc kubenswrapper[4947]: I1129 06:54:22.987770 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 06:54:27 crc kubenswrapper[4947]: I1129 06:54:27.604521 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" Nov 29 06:54:27 crc kubenswrapper[4947]: I1129 06:54:27.791598 4947 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" Nov 29 06:54:27 crc kubenswrapper[4947]: I1129 06:54:27.898824 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-v9kgh"] Nov 29 06:54:28 crc kubenswrapper[4947]: I1129 06:54:28.092855 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" podUID="f9ce8dcb-fa34-4847-b6dd-c97af246991c" containerName="dnsmasq-dns" containerID="cri-o://c405b0248c4ec4832a7594f3ce0e7b7ccce52a442dfcb441f054ae23559f0606" gracePeriod=10 Nov 29 06:54:32 crc kubenswrapper[4947]: I1129 06:54:32.603077 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" podUID="f9ce8dcb-fa34-4847-b6dd-c97af246991c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.108:5353: connect: connection refused" Nov 29 06:54:35 crc kubenswrapper[4947]: I1129 06:54:35.861585 4947 generic.go:334] "Generic (PLEG): container finished" podID="f9ce8dcb-fa34-4847-b6dd-c97af246991c" containerID="c405b0248c4ec4832a7594f3ce0e7b7ccce52a442dfcb441f054ae23559f0606" exitCode=-1 Nov 29 06:54:35 crc kubenswrapper[4947]: I1129 06:54:35.861682 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" event={"ID":"f9ce8dcb-fa34-4847-b6dd-c97af246991c","Type":"ContainerDied","Data":"c405b0248c4ec4832a7594f3ce0e7b7ccce52a442dfcb441f054ae23559f0606"} Nov 29 06:54:37 crc kubenswrapper[4947]: I1129 06:54:37.603205 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" podUID="f9ce8dcb-fa34-4847-b6dd-c97af246991c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.108:5353: connect: connection refused" Nov 29 06:54:37 crc kubenswrapper[4947]: I1129 06:54:37.912888 4947 generic.go:334] "Generic (PLEG): container finished" podID="e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8" 
containerID="715f470ce1721879c0311ceda3a06eeda1c4246dd6ce79c6e137a07f44a5fe37" exitCode=0 Nov 29 06:54:37 crc kubenswrapper[4947]: I1129 06:54:37.913356 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8","Type":"ContainerDied","Data":"715f470ce1721879c0311ceda3a06eeda1c4246dd6ce79c6e137a07f44a5fe37"} Nov 29 06:54:37 crc kubenswrapper[4947]: I1129 06:54:37.975160 4947 generic.go:334] "Generic (PLEG): container finished" podID="1df9108b-7e5b-4dd6-bd7e-787381428bce" containerID="c557257ddf99194f29d76a34461fba82b4700930bfc653296e8ab52b9ef25318" exitCode=0 Nov 29 06:54:37 crc kubenswrapper[4947]: I1129 06:54:37.975237 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1df9108b-7e5b-4dd6-bd7e-787381428bce","Type":"ContainerDied","Data":"c557257ddf99194f29d76a34461fba82b4700930bfc653296e8ab52b9ef25318"} Nov 29 06:54:39 crc kubenswrapper[4947]: I1129 06:54:39.916420 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" Nov 29 06:54:39 crc kubenswrapper[4947]: I1129 06:54:39.996198 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" Nov 29 06:54:39 crc kubenswrapper[4947]: I1129 06:54:39.996193 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-v9kgh" event={"ID":"f9ce8dcb-fa34-4847-b6dd-c97af246991c","Type":"ContainerDied","Data":"8ed4ec2bd9d67adbed92cc001ff4ede6c64d37f7895023a852800e2613972f41"} Nov 29 06:54:39 crc kubenswrapper[4947]: I1129 06:54:39.996637 4947 scope.go:117] "RemoveContainer" containerID="c405b0248c4ec4832a7594f3ce0e7b7ccce52a442dfcb441f054ae23559f0606" Nov 29 06:54:40 crc kubenswrapper[4947]: I1129 06:54:40.023098 4947 scope.go:117] "RemoveContainer" containerID="e1d213d3c8e6563e5f1b2720185df493c3ab0b066c5ec82e32099a8c86fb1cf7" Nov 29 06:54:40 crc kubenswrapper[4947]: I1129 06:54:40.079535 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9ce8dcb-fa34-4847-b6dd-c97af246991c-config\") pod \"f9ce8dcb-fa34-4847-b6dd-c97af246991c\" (UID: \"f9ce8dcb-fa34-4847-b6dd-c97af246991c\") " Nov 29 06:54:40 crc kubenswrapper[4947]: I1129 06:54:40.079615 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9ce8dcb-fa34-4847-b6dd-c97af246991c-ovsdbserver-sb\") pod \"f9ce8dcb-fa34-4847-b6dd-c97af246991c\" (UID: \"f9ce8dcb-fa34-4847-b6dd-c97af246991c\") " Nov 29 06:54:40 crc kubenswrapper[4947]: I1129 06:54:40.079686 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9ce8dcb-fa34-4847-b6dd-c97af246991c-dns-svc\") pod \"f9ce8dcb-fa34-4847-b6dd-c97af246991c\" (UID: \"f9ce8dcb-fa34-4847-b6dd-c97af246991c\") " Nov 29 06:54:40 crc kubenswrapper[4947]: I1129 06:54:40.079820 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq45x\" (UniqueName: 
\"kubernetes.io/projected/f9ce8dcb-fa34-4847-b6dd-c97af246991c-kube-api-access-gq45x\") pod \"f9ce8dcb-fa34-4847-b6dd-c97af246991c\" (UID: \"f9ce8dcb-fa34-4847-b6dd-c97af246991c\") " Nov 29 06:54:40 crc kubenswrapper[4947]: I1129 06:54:40.089348 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9ce8dcb-fa34-4847-b6dd-c97af246991c-kube-api-access-gq45x" (OuterVolumeSpecName: "kube-api-access-gq45x") pod "f9ce8dcb-fa34-4847-b6dd-c97af246991c" (UID: "f9ce8dcb-fa34-4847-b6dd-c97af246991c"). InnerVolumeSpecName "kube-api-access-gq45x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:54:40 crc kubenswrapper[4947]: I1129 06:54:40.127688 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9ce8dcb-fa34-4847-b6dd-c97af246991c-config" (OuterVolumeSpecName: "config") pod "f9ce8dcb-fa34-4847-b6dd-c97af246991c" (UID: "f9ce8dcb-fa34-4847-b6dd-c97af246991c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:54:40 crc kubenswrapper[4947]: I1129 06:54:40.128641 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9ce8dcb-fa34-4847-b6dd-c97af246991c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f9ce8dcb-fa34-4847-b6dd-c97af246991c" (UID: "f9ce8dcb-fa34-4847-b6dd-c97af246991c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:54:40 crc kubenswrapper[4947]: I1129 06:54:40.143525 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9ce8dcb-fa34-4847-b6dd-c97af246991c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f9ce8dcb-fa34-4847-b6dd-c97af246991c" (UID: "f9ce8dcb-fa34-4847-b6dd-c97af246991c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:54:40 crc kubenswrapper[4947]: I1129 06:54:40.182554 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq45x\" (UniqueName: \"kubernetes.io/projected/f9ce8dcb-fa34-4847-b6dd-c97af246991c-kube-api-access-gq45x\") on node \"crc\" DevicePath \"\"" Nov 29 06:54:40 crc kubenswrapper[4947]: I1129 06:54:40.182992 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9ce8dcb-fa34-4847-b6dd-c97af246991c-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:54:40 crc kubenswrapper[4947]: I1129 06:54:40.183126 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9ce8dcb-fa34-4847-b6dd-c97af246991c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 06:54:40 crc kubenswrapper[4947]: I1129 06:54:40.183298 4947 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9ce8dcb-fa34-4847-b6dd-c97af246991c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 06:54:40 crc kubenswrapper[4947]: I1129 06:54:40.337085 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-v9kgh"] Nov 29 06:54:40 crc kubenswrapper[4947]: I1129 06:54:40.344510 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-v9kgh"] Nov 29 06:54:41 crc kubenswrapper[4947]: I1129 06:54:41.189212 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9ce8dcb-fa34-4847-b6dd-c97af246991c" path="/var/lib/kubelet/pods/f9ce8dcb-fa34-4847-b6dd-c97af246991c/volumes" Nov 29 06:54:42 crc kubenswrapper[4947]: I1129 06:54:42.021740 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8","Type":"ContainerStarted","Data":"1de336f26a889f79cd35e84bb053e33754774c33fa596603e4d541d0ced2a2dc"} 
Nov 29 06:54:42 crc kubenswrapper[4947]: I1129 06:54:42.025742 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1df9108b-7e5b-4dd6-bd7e-787381428bce","Type":"ContainerStarted","Data":"0fff10656ad29de8b7f5aeb6366cf54e2b7e706199c69fac88eb15e75c97c542"} Nov 29 06:54:43 crc kubenswrapper[4947]: I1129 06:54:43.034795 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 29 06:54:43 crc kubenswrapper[4947]: I1129 06:54:43.121375 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.932998976 podStartE2EDuration="1m8.121346823s" podCreationTimestamp="2025-11-29 06:53:35 +0000 UTC" firstStartedPulling="2025-11-29 06:53:37.567584506 +0000 UTC m=+1168.611966587" lastFinishedPulling="2025-11-29 06:54:02.755932353 +0000 UTC m=+1193.800314434" observedRunningTime="2025-11-29 06:54:43.055871834 +0000 UTC m=+1234.100253935" watchObservedRunningTime="2025-11-29 06:54:43.121346823 +0000 UTC m=+1234.165728904" Nov 29 06:54:43 crc kubenswrapper[4947]: I1129 06:54:43.147423 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.139789952 podStartE2EDuration="1m8.147398147s" podCreationTimestamp="2025-11-29 06:53:35 +0000 UTC" firstStartedPulling="2025-11-29 06:53:36.865438009 +0000 UTC m=+1167.909820080" lastFinishedPulling="2025-11-29 06:54:02.873046194 +0000 UTC m=+1193.917428275" observedRunningTime="2025-11-29 06:54:43.145704776 +0000 UTC m=+1234.190086857" watchObservedRunningTime="2025-11-29 06:54:43.147398147 +0000 UTC m=+1234.191780228" Nov 29 06:54:45 crc kubenswrapper[4947]: I1129 06:54:45.663546 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sn9qf" podUID="fc369426-cee0-4a95-aa63-9b8d4df05e7a" containerName="ovn-controller" probeResult="failure" output=< Nov 29 06:54:45 crc 
kubenswrapper[4947]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 29 06:54:45 crc kubenswrapper[4947]: > Nov 29 06:54:45 crc kubenswrapper[4947]: I1129 06:54:45.690281 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:54:45 crc kubenswrapper[4947]: I1129 06:54:45.691724 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-ztbsf" Nov 29 06:54:45 crc kubenswrapper[4947]: I1129 06:54:45.940539 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sn9qf-config-fdjj7"] Nov 29 06:54:45 crc kubenswrapper[4947]: E1129 06:54:45.941283 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ce8dcb-fa34-4847-b6dd-c97af246991c" containerName="init" Nov 29 06:54:45 crc kubenswrapper[4947]: I1129 06:54:45.941299 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ce8dcb-fa34-4847-b6dd-c97af246991c" containerName="init" Nov 29 06:54:45 crc kubenswrapper[4947]: E1129 06:54:45.941320 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de3454e-aa59-4cbf-855f-3a161faa93eb" containerName="dnsmasq-dns" Nov 29 06:54:45 crc kubenswrapper[4947]: I1129 06:54:45.941327 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de3454e-aa59-4cbf-855f-3a161faa93eb" containerName="dnsmasq-dns" Nov 29 06:54:45 crc kubenswrapper[4947]: E1129 06:54:45.941352 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de3454e-aa59-4cbf-855f-3a161faa93eb" containerName="init" Nov 29 06:54:45 crc kubenswrapper[4947]: I1129 06:54:45.941360 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de3454e-aa59-4cbf-855f-3a161faa93eb" containerName="init" Nov 29 06:54:45 crc kubenswrapper[4947]: E1129 06:54:45.941370 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565b2507-ddbb-4657-b879-082fb62e1284" 
containerName="init" Nov 29 06:54:45 crc kubenswrapper[4947]: I1129 06:54:45.941378 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="565b2507-ddbb-4657-b879-082fb62e1284" containerName="init" Nov 29 06:54:45 crc kubenswrapper[4947]: E1129 06:54:45.941386 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565b2507-ddbb-4657-b879-082fb62e1284" containerName="dnsmasq-dns" Nov 29 06:54:45 crc kubenswrapper[4947]: I1129 06:54:45.941393 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="565b2507-ddbb-4657-b879-082fb62e1284" containerName="dnsmasq-dns" Nov 29 06:54:45 crc kubenswrapper[4947]: E1129 06:54:45.941410 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ce8dcb-fa34-4847-b6dd-c97af246991c" containerName="dnsmasq-dns" Nov 29 06:54:45 crc kubenswrapper[4947]: I1129 06:54:45.941417 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ce8dcb-fa34-4847-b6dd-c97af246991c" containerName="dnsmasq-dns" Nov 29 06:54:45 crc kubenswrapper[4947]: I1129 06:54:45.941603 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9ce8dcb-fa34-4847-b6dd-c97af246991c" containerName="dnsmasq-dns" Nov 29 06:54:45 crc kubenswrapper[4947]: I1129 06:54:45.941626 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="0de3454e-aa59-4cbf-855f-3a161faa93eb" containerName="dnsmasq-dns" Nov 29 06:54:45 crc kubenswrapper[4947]: I1129 06:54:45.941639 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="565b2507-ddbb-4657-b879-082fb62e1284" containerName="dnsmasq-dns" Nov 29 06:54:45 crc kubenswrapper[4947]: I1129 06:54:45.942248 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sn9qf-config-fdjj7" Nov 29 06:54:45 crc kubenswrapper[4947]: I1129 06:54:45.945273 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 29 06:54:46 crc kubenswrapper[4947]: I1129 06:54:46.024593 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sn9qf-config-fdjj7"] Nov 29 06:54:46 crc kubenswrapper[4947]: I1129 06:54:46.123568 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-var-log-ovn\") pod \"ovn-controller-sn9qf-config-fdjj7\" (UID: \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " pod="openstack/ovn-controller-sn9qf-config-fdjj7" Nov 29 06:54:46 crc kubenswrapper[4947]: I1129 06:54:46.123656 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-var-run\") pod \"ovn-controller-sn9qf-config-fdjj7\" (UID: \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " pod="openstack/ovn-controller-sn9qf-config-fdjj7" Nov 29 06:54:46 crc kubenswrapper[4947]: I1129 06:54:46.123688 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-var-run-ovn\") pod \"ovn-controller-sn9qf-config-fdjj7\" (UID: \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " pod="openstack/ovn-controller-sn9qf-config-fdjj7" Nov 29 06:54:46 crc kubenswrapper[4947]: I1129 06:54:46.123912 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-additional-scripts\") pod \"ovn-controller-sn9qf-config-fdjj7\" (UID: 
\"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " pod="openstack/ovn-controller-sn9qf-config-fdjj7" Nov 29 06:54:46 crc kubenswrapper[4947]: I1129 06:54:46.123982 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-scripts\") pod \"ovn-controller-sn9qf-config-fdjj7\" (UID: \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " pod="openstack/ovn-controller-sn9qf-config-fdjj7" Nov 29 06:54:46 crc kubenswrapper[4947]: I1129 06:54:46.124098 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj8t6\" (UniqueName: \"kubernetes.io/projected/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-kube-api-access-xj8t6\") pod \"ovn-controller-sn9qf-config-fdjj7\" (UID: \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " pod="openstack/ovn-controller-sn9qf-config-fdjj7" Nov 29 06:54:46 crc kubenswrapper[4947]: I1129 06:54:46.225826 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-var-log-ovn\") pod \"ovn-controller-sn9qf-config-fdjj7\" (UID: \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " pod="openstack/ovn-controller-sn9qf-config-fdjj7" Nov 29 06:54:46 crc kubenswrapper[4947]: I1129 06:54:46.226284 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-var-log-ovn\") pod \"ovn-controller-sn9qf-config-fdjj7\" (UID: \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " pod="openstack/ovn-controller-sn9qf-config-fdjj7" Nov 29 06:54:46 crc kubenswrapper[4947]: I1129 06:54:46.226621 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-var-run\") pod \"ovn-controller-sn9qf-config-fdjj7\" (UID: 
\"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " pod="openstack/ovn-controller-sn9qf-config-fdjj7" Nov 29 06:54:46 crc kubenswrapper[4947]: I1129 06:54:46.226939 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-var-run-ovn\") pod \"ovn-controller-sn9qf-config-fdjj7\" (UID: \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " pod="openstack/ovn-controller-sn9qf-config-fdjj7" Nov 29 06:54:46 crc kubenswrapper[4947]: I1129 06:54:46.227042 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-var-run-ovn\") pod \"ovn-controller-sn9qf-config-fdjj7\" (UID: \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " pod="openstack/ovn-controller-sn9qf-config-fdjj7" Nov 29 06:54:46 crc kubenswrapper[4947]: I1129 06:54:46.226962 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-var-run\") pod \"ovn-controller-sn9qf-config-fdjj7\" (UID: \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " pod="openstack/ovn-controller-sn9qf-config-fdjj7" Nov 29 06:54:46 crc kubenswrapper[4947]: I1129 06:54:46.227199 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-additional-scripts\") pod \"ovn-controller-sn9qf-config-fdjj7\" (UID: \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " pod="openstack/ovn-controller-sn9qf-config-fdjj7" Nov 29 06:54:46 crc kubenswrapper[4947]: I1129 06:54:46.227804 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-scripts\") pod \"ovn-controller-sn9qf-config-fdjj7\" (UID: \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " 
pod="openstack/ovn-controller-sn9qf-config-fdjj7" Nov 29 06:54:46 crc kubenswrapper[4947]: I1129 06:54:46.227997 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj8t6\" (UniqueName: \"kubernetes.io/projected/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-kube-api-access-xj8t6\") pod \"ovn-controller-sn9qf-config-fdjj7\" (UID: \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " pod="openstack/ovn-controller-sn9qf-config-fdjj7" Nov 29 06:54:46 crc kubenswrapper[4947]: I1129 06:54:46.228544 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-additional-scripts\") pod \"ovn-controller-sn9qf-config-fdjj7\" (UID: \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " pod="openstack/ovn-controller-sn9qf-config-fdjj7" Nov 29 06:54:46 crc kubenswrapper[4947]: I1129 06:54:46.230431 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-scripts\") pod \"ovn-controller-sn9qf-config-fdjj7\" (UID: \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " pod="openstack/ovn-controller-sn9qf-config-fdjj7" Nov 29 06:54:46 crc kubenswrapper[4947]: I1129 06:54:46.261864 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj8t6\" (UniqueName: \"kubernetes.io/projected/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-kube-api-access-xj8t6\") pod \"ovn-controller-sn9qf-config-fdjj7\" (UID: \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " pod="openstack/ovn-controller-sn9qf-config-fdjj7" Nov 29 06:54:46 crc kubenswrapper[4947]: I1129 06:54:46.325005 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sn9qf-config-fdjj7" Nov 29 06:54:46 crc kubenswrapper[4947]: I1129 06:54:46.547522 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:54:48 crc kubenswrapper[4947]: I1129 06:54:48.575043 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sn9qf-config-fdjj7"] Nov 29 06:54:48 crc kubenswrapper[4947]: W1129 06:54:48.576860 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8dfea0f_ddc1_43e2_84b8_4670accc8ac6.slice/crio-a0e4f325e511ba820ff4c423c19bf7e80edae5756825bf7ac3c8580beb17e13d WatchSource:0}: Error finding container a0e4f325e511ba820ff4c423c19bf7e80edae5756825bf7ac3c8580beb17e13d: Status 404 returned error can't find the container with id a0e4f325e511ba820ff4c423c19bf7e80edae5756825bf7ac3c8580beb17e13d Nov 29 06:54:49 crc kubenswrapper[4947]: I1129 06:54:49.085049 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sn9qf-config-fdjj7" event={"ID":"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6","Type":"ContainerStarted","Data":"a0e4f325e511ba820ff4c423c19bf7e80edae5756825bf7ac3c8580beb17e13d"} Nov 29 06:54:50 crc kubenswrapper[4947]: I1129 06:54:50.098648 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sn9qf-config-fdjj7" event={"ID":"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6","Type":"ContainerStarted","Data":"581deea5a04147a566075b9d640e15eff4822d496d12e2c9c238dbd60a9b5d18"} Nov 29 06:54:50 crc kubenswrapper[4947]: I1129 06:54:50.102711 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"869b4713-f63d-4d82-aa68-e1addc0ae4eb","Type":"ContainerStarted","Data":"9109df95282aa32c673668281833d9f8cfd66744e89e095c7e6a28b44d04cf5f"} Nov 29 06:54:50 crc kubenswrapper[4947]: I1129 06:54:50.130613 4947 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 29 06:54:50 crc kubenswrapper[4947]: I1129 06:54:50.142294 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sn9qf-config-fdjj7" podStartSLOduration=5.142269325 podStartE2EDuration="5.142269325s" podCreationTimestamp="2025-11-29 06:54:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:54:50.139801715 +0000 UTC m=+1241.184183806" watchObservedRunningTime="2025-11-29 06:54:50.142269325 +0000 UTC m=+1241.186651406" Nov 29 06:54:50 crc kubenswrapper[4947]: I1129 06:54:50.232998 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="bc968903-97f7-437d-882d-1bb4278dab13" containerName="galera" probeResult="failure" output=< Nov 29 06:54:50 crc kubenswrapper[4947]: wsrep_local_state_comment (Joined) differs from Synced Nov 29 06:54:50 crc kubenswrapper[4947]: > Nov 29 06:54:50 crc kubenswrapper[4947]: I1129 06:54:50.689599 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-sn9qf" Nov 29 06:54:51 crc kubenswrapper[4947]: I1129 06:54:51.113937 4947 generic.go:334] "Generic (PLEG): container finished" podID="f8dfea0f-ddc1-43e2-84b8-4670accc8ac6" containerID="581deea5a04147a566075b9d640e15eff4822d496d12e2c9c238dbd60a9b5d18" exitCode=0 Nov 29 06:54:51 crc kubenswrapper[4947]: I1129 06:54:51.115338 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sn9qf-config-fdjj7" event={"ID":"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6","Type":"ContainerDied","Data":"581deea5a04147a566075b9d640e15eff4822d496d12e2c9c238dbd60a9b5d18"} Nov 29 06:54:51 crc kubenswrapper[4947]: I1129 06:54:51.117361 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"869b4713-f63d-4d82-aa68-e1addc0ae4eb","Type":"ContainerStarted","Data":"1750216d9343eec5da8aacc23533062d563d15b7176759495afb26564fd769f8"} Nov 29 06:54:51 crc kubenswrapper[4947]: I1129 06:54:51.118422 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 29 06:54:51 crc kubenswrapper[4947]: I1129 06:54:51.177657 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=4.466317739 podStartE2EDuration="34.17763915s" podCreationTimestamp="2025-11-29 06:54:17 +0000 UTC" firstStartedPulling="2025-11-29 06:54:18.376432732 +0000 UTC m=+1209.420814813" lastFinishedPulling="2025-11-29 06:54:48.087754143 +0000 UTC m=+1239.132136224" observedRunningTime="2025-11-29 06:54:51.173340485 +0000 UTC m=+1242.217722586" watchObservedRunningTime="2025-11-29 06:54:51.17763915 +0000 UTC m=+1242.222021231" Nov 29 06:54:52 crc kubenswrapper[4947]: I1129 06:54:52.504809 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sn9qf-config-fdjj7" Nov 29 06:54:52 crc kubenswrapper[4947]: I1129 06:54:52.663436 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-additional-scripts\") pod \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\" (UID: \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " Nov 29 06:54:52 crc kubenswrapper[4947]: I1129 06:54:52.663578 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-var-run-ovn\") pod \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\" (UID: \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " Nov 29 06:54:52 crc kubenswrapper[4947]: I1129 06:54:52.663644 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-var-log-ovn\") pod \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\" (UID: \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " Nov 29 06:54:52 crc kubenswrapper[4947]: I1129 06:54:52.663680 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-scripts\") pod \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\" (UID: \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " Nov 29 06:54:52 crc kubenswrapper[4947]: I1129 06:54:52.663689 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f8dfea0f-ddc1-43e2-84b8-4670accc8ac6" (UID: "f8dfea0f-ddc1-43e2-84b8-4670accc8ac6"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:54:52 crc kubenswrapper[4947]: I1129 06:54:52.663809 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f8dfea0f-ddc1-43e2-84b8-4670accc8ac6" (UID: "f8dfea0f-ddc1-43e2-84b8-4670accc8ac6"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:54:52 crc kubenswrapper[4947]: I1129 06:54:52.663873 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-var-run" (OuterVolumeSpecName: "var-run") pod "f8dfea0f-ddc1-43e2-84b8-4670accc8ac6" (UID: "f8dfea0f-ddc1-43e2-84b8-4670accc8ac6"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:54:52 crc kubenswrapper[4947]: I1129 06:54:52.663836 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-var-run\") pod \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\" (UID: \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " Nov 29 06:54:52 crc kubenswrapper[4947]: I1129 06:54:52.664043 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj8t6\" (UniqueName: \"kubernetes.io/projected/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-kube-api-access-xj8t6\") pod \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\" (UID: \"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6\") " Nov 29 06:54:52 crc kubenswrapper[4947]: I1129 06:54:52.664406 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f8dfea0f-ddc1-43e2-84b8-4670accc8ac6" (UID: "f8dfea0f-ddc1-43e2-84b8-4670accc8ac6"). 
InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:54:52 crc kubenswrapper[4947]: I1129 06:54:52.664802 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-scripts" (OuterVolumeSpecName: "scripts") pod "f8dfea0f-ddc1-43e2-84b8-4670accc8ac6" (UID: "f8dfea0f-ddc1-43e2-84b8-4670accc8ac6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:54:52 crc kubenswrapper[4947]: I1129 06:54:52.665086 4947 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-var-run\") on node \"crc\" DevicePath \"\"" Nov 29 06:54:52 crc kubenswrapper[4947]: I1129 06:54:52.665100 4947 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:54:52 crc kubenswrapper[4947]: I1129 06:54:52.665111 4947 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 29 06:54:52 crc kubenswrapper[4947]: I1129 06:54:52.665123 4947 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 29 06:54:52 crc kubenswrapper[4947]: I1129 06:54:52.665132 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:54:52 crc kubenswrapper[4947]: I1129 06:54:52.674238 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-kube-api-access-xj8t6" (OuterVolumeSpecName: "kube-api-access-xj8t6") pod "f8dfea0f-ddc1-43e2-84b8-4670accc8ac6" (UID: "f8dfea0f-ddc1-43e2-84b8-4670accc8ac6"). InnerVolumeSpecName "kube-api-access-xj8t6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:54:52 crc kubenswrapper[4947]: I1129 06:54:52.766477 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj8t6\" (UniqueName: \"kubernetes.io/projected/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6-kube-api-access-xj8t6\") on node \"crc\" DevicePath \"\"" Nov 29 06:54:52 crc kubenswrapper[4947]: I1129 06:54:52.987680 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 06:54:52 crc kubenswrapper[4947]: I1129 06:54:52.988056 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 06:54:53 crc kubenswrapper[4947]: I1129 06:54:53.136091 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sn9qf-config-fdjj7" event={"ID":"f8dfea0f-ddc1-43e2-84b8-4670accc8ac6","Type":"ContainerDied","Data":"a0e4f325e511ba820ff4c423c19bf7e80edae5756825bf7ac3c8580beb17e13d"} Nov 29 06:54:53 crc kubenswrapper[4947]: I1129 06:54:53.136172 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0e4f325e511ba820ff4c423c19bf7e80edae5756825bf7ac3c8580beb17e13d" Nov 29 06:54:53 crc kubenswrapper[4947]: I1129 06:54:53.136556 4947 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/ovn-controller-sn9qf-config-fdjj7" Nov 29 06:54:53 crc kubenswrapper[4947]: I1129 06:54:53.274460 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sn9qf-config-fdjj7"] Nov 29 06:54:53 crc kubenswrapper[4947]: I1129 06:54:53.295313 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sn9qf-config-fdjj7"] Nov 29 06:54:55 crc kubenswrapper[4947]: I1129 06:54:55.190820 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8dfea0f-ddc1-43e2-84b8-4670accc8ac6" path="/var/lib/kubelet/pods/f8dfea0f-ddc1-43e2-84b8-4670accc8ac6/volumes" Nov 29 06:54:56 crc kubenswrapper[4947]: I1129 06:54:56.550727 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Nov 29 06:54:56 crc kubenswrapper[4947]: I1129 06:54:56.936341 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="1df9108b-7e5b-4dd6-bd7e-787381428bce" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Nov 29 06:54:58 crc kubenswrapper[4947]: I1129 06:54:58.001794 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 29 06:54:58 crc kubenswrapper[4947]: I1129 06:54:58.102835 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 29 06:54:58 crc kubenswrapper[4947]: I1129 06:54:58.812533 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 29 06:54:59 crc kubenswrapper[4947]: I1129 06:54:59.843919 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-50f4-account-create-update-9rpwl"] 
Nov 29 06:54:59 crc kubenswrapper[4947]: E1129 06:54:59.844451 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8dfea0f-ddc1-43e2-84b8-4670accc8ac6" containerName="ovn-config" Nov 29 06:54:59 crc kubenswrapper[4947]: I1129 06:54:59.844468 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8dfea0f-ddc1-43e2-84b8-4670accc8ac6" containerName="ovn-config" Nov 29 06:54:59 crc kubenswrapper[4947]: I1129 06:54:59.844677 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8dfea0f-ddc1-43e2-84b8-4670accc8ac6" containerName="ovn-config" Nov 29 06:54:59 crc kubenswrapper[4947]: I1129 06:54:59.845391 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-50f4-account-create-update-9rpwl" Nov 29 06:54:59 crc kubenswrapper[4947]: I1129 06:54:59.848749 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 29 06:54:59 crc kubenswrapper[4947]: I1129 06:54:59.852600 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-50f4-account-create-update-9rpwl"] Nov 29 06:54:59 crc kubenswrapper[4947]: I1129 06:54:59.891410 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6pgp4"] Nov 29 06:54:59 crc kubenswrapper[4947]: I1129 06:54:59.893056 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6pgp4" Nov 29 06:54:59 crc kubenswrapper[4947]: I1129 06:54:59.899596 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6pgp4"] Nov 29 06:54:59 crc kubenswrapper[4947]: I1129 06:54:59.998815 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5ntk\" (UniqueName: \"kubernetes.io/projected/28eb01a7-ba6c-4709-ba2c-031ef8fe4b17-kube-api-access-v5ntk\") pod \"keystone-db-create-6pgp4\" (UID: \"28eb01a7-ba6c-4709-ba2c-031ef8fe4b17\") " pod="openstack/keystone-db-create-6pgp4" Nov 29 06:54:59 crc kubenswrapper[4947]: I1129 06:54:59.998894 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f8504e7-7313-440f-87b1-13a06167f241-operator-scripts\") pod \"keystone-50f4-account-create-update-9rpwl\" (UID: \"0f8504e7-7313-440f-87b1-13a06167f241\") " pod="openstack/keystone-50f4-account-create-update-9rpwl" Nov 29 06:54:59 crc kubenswrapper[4947]: I1129 06:54:59.999147 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28eb01a7-ba6c-4709-ba2c-031ef8fe4b17-operator-scripts\") pod \"keystone-db-create-6pgp4\" (UID: \"28eb01a7-ba6c-4709-ba2c-031ef8fe4b17\") " pod="openstack/keystone-db-create-6pgp4" Nov 29 06:54:59 crc kubenswrapper[4947]: I1129 06:54:59.999448 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq8lt\" (UniqueName: \"kubernetes.io/projected/0f8504e7-7313-440f-87b1-13a06167f241-kube-api-access-dq8lt\") pod \"keystone-50f4-account-create-update-9rpwl\" (UID: \"0f8504e7-7313-440f-87b1-13a06167f241\") " pod="openstack/keystone-50f4-account-create-update-9rpwl" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.087268 4947 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-d5thj"] Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.088566 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-d5thj" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.101342 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5ntk\" (UniqueName: \"kubernetes.io/projected/28eb01a7-ba6c-4709-ba2c-031ef8fe4b17-kube-api-access-v5ntk\") pod \"keystone-db-create-6pgp4\" (UID: \"28eb01a7-ba6c-4709-ba2c-031ef8fe4b17\") " pod="openstack/keystone-db-create-6pgp4" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.101452 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f8504e7-7313-440f-87b1-13a06167f241-operator-scripts\") pod \"keystone-50f4-account-create-update-9rpwl\" (UID: \"0f8504e7-7313-440f-87b1-13a06167f241\") " pod="openstack/keystone-50f4-account-create-update-9rpwl" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.101517 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28eb01a7-ba6c-4709-ba2c-031ef8fe4b17-operator-scripts\") pod \"keystone-db-create-6pgp4\" (UID: \"28eb01a7-ba6c-4709-ba2c-031ef8fe4b17\") " pod="openstack/keystone-db-create-6pgp4" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.101574 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq8lt\" (UniqueName: \"kubernetes.io/projected/0f8504e7-7313-440f-87b1-13a06167f241-kube-api-access-dq8lt\") pod \"keystone-50f4-account-create-update-9rpwl\" (UID: \"0f8504e7-7313-440f-87b1-13a06167f241\") " pod="openstack/keystone-50f4-account-create-update-9rpwl" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.102456 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f8504e7-7313-440f-87b1-13a06167f241-operator-scripts\") pod \"keystone-50f4-account-create-update-9rpwl\" (UID: \"0f8504e7-7313-440f-87b1-13a06167f241\") " pod="openstack/keystone-50f4-account-create-update-9rpwl" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.103434 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28eb01a7-ba6c-4709-ba2c-031ef8fe4b17-operator-scripts\") pod \"keystone-db-create-6pgp4\" (UID: \"28eb01a7-ba6c-4709-ba2c-031ef8fe4b17\") " pod="openstack/keystone-db-create-6pgp4" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.114357 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-d5thj"] Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.140322 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq8lt\" (UniqueName: \"kubernetes.io/projected/0f8504e7-7313-440f-87b1-13a06167f241-kube-api-access-dq8lt\") pod \"keystone-50f4-account-create-update-9rpwl\" (UID: \"0f8504e7-7313-440f-87b1-13a06167f241\") " pod="openstack/keystone-50f4-account-create-update-9rpwl" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.154245 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5ntk\" (UniqueName: \"kubernetes.io/projected/28eb01a7-ba6c-4709-ba2c-031ef8fe4b17-kube-api-access-v5ntk\") pod \"keystone-db-create-6pgp4\" (UID: \"28eb01a7-ba6c-4709-ba2c-031ef8fe4b17\") " pod="openstack/keystone-db-create-6pgp4" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.176100 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-50f4-account-create-update-9rpwl" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.203854 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75a2658c-c424-413f-9b8e-40a8cf3e6aeb-operator-scripts\") pod \"placement-db-create-d5thj\" (UID: \"75a2658c-c424-413f-9b8e-40a8cf3e6aeb\") " pod="openstack/placement-db-create-d5thj" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.203952 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5jmf\" (UniqueName: \"kubernetes.io/projected/75a2658c-c424-413f-9b8e-40a8cf3e6aeb-kube-api-access-h5jmf\") pod \"placement-db-create-d5thj\" (UID: \"75a2658c-c424-413f-9b8e-40a8cf3e6aeb\") " pod="openstack/placement-db-create-d5thj" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.216167 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6pgp4" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.225042 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-2380-account-create-update-8fbcp"] Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.227124 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2380-account-create-update-8fbcp" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.234737 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.244315 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2380-account-create-update-8fbcp"] Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.306527 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75a2658c-c424-413f-9b8e-40a8cf3e6aeb-operator-scripts\") pod \"placement-db-create-d5thj\" (UID: \"75a2658c-c424-413f-9b8e-40a8cf3e6aeb\") " pod="openstack/placement-db-create-d5thj" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.306633 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5jmf\" (UniqueName: \"kubernetes.io/projected/75a2658c-c424-413f-9b8e-40a8cf3e6aeb-kube-api-access-h5jmf\") pod \"placement-db-create-d5thj\" (UID: \"75a2658c-c424-413f-9b8e-40a8cf3e6aeb\") " pod="openstack/placement-db-create-d5thj" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.307427 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75a2658c-c424-413f-9b8e-40a8cf3e6aeb-operator-scripts\") pod \"placement-db-create-d5thj\" (UID: \"75a2658c-c424-413f-9b8e-40a8cf3e6aeb\") " pod="openstack/placement-db-create-d5thj" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.330236 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5jmf\" (UniqueName: \"kubernetes.io/projected/75a2658c-c424-413f-9b8e-40a8cf3e6aeb-kube-api-access-h5jmf\") pod \"placement-db-create-d5thj\" (UID: \"75a2658c-c424-413f-9b8e-40a8cf3e6aeb\") " pod="openstack/placement-db-create-d5thj" Nov 29 06:55:00 crc 
kubenswrapper[4947]: I1129 06:55:00.409952 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-d5thj" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.411168 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3da3753b-36fd-41b4-a3d9-34ad05c5e2f1-operator-scripts\") pod \"placement-2380-account-create-update-8fbcp\" (UID: \"3da3753b-36fd-41b4-a3d9-34ad05c5e2f1\") " pod="openstack/placement-2380-account-create-update-8fbcp" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.411420 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qwwd\" (UniqueName: \"kubernetes.io/projected/3da3753b-36fd-41b4-a3d9-34ad05c5e2f1-kube-api-access-5qwwd\") pod \"placement-2380-account-create-update-8fbcp\" (UID: \"3da3753b-36fd-41b4-a3d9-34ad05c5e2f1\") " pod="openstack/placement-2380-account-create-update-8fbcp" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.513524 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3da3753b-36fd-41b4-a3d9-34ad05c5e2f1-operator-scripts\") pod \"placement-2380-account-create-update-8fbcp\" (UID: \"3da3753b-36fd-41b4-a3d9-34ad05c5e2f1\") " pod="openstack/placement-2380-account-create-update-8fbcp" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.514060 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qwwd\" (UniqueName: \"kubernetes.io/projected/3da3753b-36fd-41b4-a3d9-34ad05c5e2f1-kube-api-access-5qwwd\") pod \"placement-2380-account-create-update-8fbcp\" (UID: \"3da3753b-36fd-41b4-a3d9-34ad05c5e2f1\") " pod="openstack/placement-2380-account-create-update-8fbcp" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.515418 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3da3753b-36fd-41b4-a3d9-34ad05c5e2f1-operator-scripts\") pod \"placement-2380-account-create-update-8fbcp\" (UID: \"3da3753b-36fd-41b4-a3d9-34ad05c5e2f1\") " pod="openstack/placement-2380-account-create-update-8fbcp" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.544138 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-50f4-account-create-update-9rpwl"] Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.545145 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qwwd\" (UniqueName: \"kubernetes.io/projected/3da3753b-36fd-41b4-a3d9-34ad05c5e2f1-kube-api-access-5qwwd\") pod \"placement-2380-account-create-update-8fbcp\" (UID: \"3da3753b-36fd-41b4-a3d9-34ad05c5e2f1\") " pod="openstack/placement-2380-account-create-update-8fbcp" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.588980 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6pgp4"] Nov 29 06:55:00 crc kubenswrapper[4947]: W1129 06:55:00.589836 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28eb01a7_ba6c_4709_ba2c_031ef8fe4b17.slice/crio-2475b1bb749de867950ba8f1c0d4a13f7d39432fcb9b32c592e434e65e7494d6 WatchSource:0}: Error finding container 2475b1bb749de867950ba8f1c0d4a13f7d39432fcb9b32c592e434e65e7494d6: Status 404 returned error can't find the container with id 2475b1bb749de867950ba8f1c0d4a13f7d39432fcb9b32c592e434e65e7494d6 Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.616242 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2380-account-create-update-8fbcp" Nov 29 06:55:00 crc kubenswrapper[4947]: I1129 06:55:00.911971 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-d5thj"] Nov 29 06:55:01 crc kubenswrapper[4947]: I1129 06:55:01.116779 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2380-account-create-update-8fbcp"] Nov 29 06:55:01 crc kubenswrapper[4947]: I1129 06:55:01.375462 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-d5thj" event={"ID":"75a2658c-c424-413f-9b8e-40a8cf3e6aeb","Type":"ContainerStarted","Data":"f3f1bb31c83c16ea36b22c1cf9f374c59a994a8d564b6b94b19b9e3af24689e0"} Nov 29 06:55:01 crc kubenswrapper[4947]: I1129 06:55:01.376037 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-d5thj" event={"ID":"75a2658c-c424-413f-9b8e-40a8cf3e6aeb","Type":"ContainerStarted","Data":"3e4a19988f7eff385a47e50f02c3a71d67236cfc61079546a3a5d3ddfb7fc720"} Nov 29 06:55:01 crc kubenswrapper[4947]: I1129 06:55:01.388531 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2380-account-create-update-8fbcp" event={"ID":"3da3753b-36fd-41b4-a3d9-34ad05c5e2f1","Type":"ContainerStarted","Data":"de114af35f95603ce933b73ee4b7c65281e7173b3c6d1e962a8e1e36ae7e12e3"} Nov 29 06:55:01 crc kubenswrapper[4947]: I1129 06:55:01.424872 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-d5thj" podStartSLOduration=1.424844907 podStartE2EDuration="1.424844907s" podCreationTimestamp="2025-11-29 06:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:55:01.421958137 +0000 UTC m=+1252.466340218" watchObservedRunningTime="2025-11-29 06:55:01.424844907 +0000 UTC m=+1252.469226988" Nov 29 06:55:01 crc kubenswrapper[4947]: I1129 
06:55:01.433963 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6pgp4" event={"ID":"28eb01a7-ba6c-4709-ba2c-031ef8fe4b17","Type":"ContainerStarted","Data":"0947df8cdc146d3c4e325d26f8b32b5b0fb2b3f775107d120af0202291f237b6"} Nov 29 06:55:01 crc kubenswrapper[4947]: I1129 06:55:01.434046 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6pgp4" event={"ID":"28eb01a7-ba6c-4709-ba2c-031ef8fe4b17","Type":"ContainerStarted","Data":"2475b1bb749de867950ba8f1c0d4a13f7d39432fcb9b32c592e434e65e7494d6"} Nov 29 06:55:01 crc kubenswrapper[4947]: I1129 06:55:01.445776 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-50f4-account-create-update-9rpwl" event={"ID":"0f8504e7-7313-440f-87b1-13a06167f241","Type":"ContainerStarted","Data":"0b180403347745205a169de81adc02b2ffc9fc88eddec7ab486e5a9ab0bade32"} Nov 29 06:55:01 crc kubenswrapper[4947]: I1129 06:55:01.445854 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-50f4-account-create-update-9rpwl" event={"ID":"0f8504e7-7313-440f-87b1-13a06167f241","Type":"ContainerStarted","Data":"ea6c98cba11cd3d623160ee29ab784834cd10158cf824e06e497e579fc55dc6e"} Nov 29 06:55:01 crc kubenswrapper[4947]: I1129 06:55:01.473278 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-50f4-account-create-update-9rpwl" podStartSLOduration=2.473259017 podStartE2EDuration="2.473259017s" podCreationTimestamp="2025-11-29 06:54:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:55:01.4680624 +0000 UTC m=+1252.512444481" watchObservedRunningTime="2025-11-29 06:55:01.473259017 +0000 UTC m=+1252.517641098" Nov 29 06:55:01 crc kubenswrapper[4947]: I1129 06:55:01.477740 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-6pgp4" 
podStartSLOduration=2.477723156 podStartE2EDuration="2.477723156s" podCreationTimestamp="2025-11-29 06:54:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:55:01.454539361 +0000 UTC m=+1252.498921432" watchObservedRunningTime="2025-11-29 06:55:01.477723156 +0000 UTC m=+1252.522105237" Nov 29 06:55:02 crc kubenswrapper[4947]: I1129 06:55:02.456598 4947 generic.go:334] "Generic (PLEG): container finished" podID="28eb01a7-ba6c-4709-ba2c-031ef8fe4b17" containerID="0947df8cdc146d3c4e325d26f8b32b5b0fb2b3f775107d120af0202291f237b6" exitCode=0 Nov 29 06:55:02 crc kubenswrapper[4947]: I1129 06:55:02.456729 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6pgp4" event={"ID":"28eb01a7-ba6c-4709-ba2c-031ef8fe4b17","Type":"ContainerDied","Data":"0947df8cdc146d3c4e325d26f8b32b5b0fb2b3f775107d120af0202291f237b6"} Nov 29 06:55:02 crc kubenswrapper[4947]: I1129 06:55:02.461268 4947 generic.go:334] "Generic (PLEG): container finished" podID="0f8504e7-7313-440f-87b1-13a06167f241" containerID="0b180403347745205a169de81adc02b2ffc9fc88eddec7ab486e5a9ab0bade32" exitCode=0 Nov 29 06:55:02 crc kubenswrapper[4947]: I1129 06:55:02.461385 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-50f4-account-create-update-9rpwl" event={"ID":"0f8504e7-7313-440f-87b1-13a06167f241","Type":"ContainerDied","Data":"0b180403347745205a169de81adc02b2ffc9fc88eddec7ab486e5a9ab0bade32"} Nov 29 06:55:02 crc kubenswrapper[4947]: I1129 06:55:02.464567 4947 generic.go:334] "Generic (PLEG): container finished" podID="75a2658c-c424-413f-9b8e-40a8cf3e6aeb" containerID="f3f1bb31c83c16ea36b22c1cf9f374c59a994a8d564b6b94b19b9e3af24689e0" exitCode=0 Nov 29 06:55:02 crc kubenswrapper[4947]: I1129 06:55:02.464653 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-d5thj" 
event={"ID":"75a2658c-c424-413f-9b8e-40a8cf3e6aeb","Type":"ContainerDied","Data":"f3f1bb31c83c16ea36b22c1cf9f374c59a994a8d564b6b94b19b9e3af24689e0"} Nov 29 06:55:02 crc kubenswrapper[4947]: I1129 06:55:02.467614 4947 generic.go:334] "Generic (PLEG): container finished" podID="3da3753b-36fd-41b4-a3d9-34ad05c5e2f1" containerID="dd31444ca7c2a006fc538c1f4c099000022fbe7ea2752e9ee126c5e23064a312" exitCode=0 Nov 29 06:55:02 crc kubenswrapper[4947]: I1129 06:55:02.467683 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2380-account-create-update-8fbcp" event={"ID":"3da3753b-36fd-41b4-a3d9-34ad05c5e2f1","Type":"ContainerDied","Data":"dd31444ca7c2a006fc538c1f4c099000022fbe7ea2752e9ee126c5e23064a312"} Nov 29 06:55:02 crc kubenswrapper[4947]: I1129 06:55:02.887678 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 29 06:55:03 crc kubenswrapper[4947]: I1129 06:55:03.888847 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-d5thj" Nov 29 06:55:03 crc kubenswrapper[4947]: I1129 06:55:03.906975 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75a2658c-c424-413f-9b8e-40a8cf3e6aeb-operator-scripts\") pod \"75a2658c-c424-413f-9b8e-40a8cf3e6aeb\" (UID: \"75a2658c-c424-413f-9b8e-40a8cf3e6aeb\") " Nov 29 06:55:03 crc kubenswrapper[4947]: I1129 06:55:03.907246 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5jmf\" (UniqueName: \"kubernetes.io/projected/75a2658c-c424-413f-9b8e-40a8cf3e6aeb-kube-api-access-h5jmf\") pod \"75a2658c-c424-413f-9b8e-40a8cf3e6aeb\" (UID: \"75a2658c-c424-413f-9b8e-40a8cf3e6aeb\") " Nov 29 06:55:03 crc kubenswrapper[4947]: I1129 06:55:03.914953 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75a2658c-c424-413f-9b8e-40a8cf3e6aeb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75a2658c-c424-413f-9b8e-40a8cf3e6aeb" (UID: "75a2658c-c424-413f-9b8e-40a8cf3e6aeb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:55:03 crc kubenswrapper[4947]: I1129 06:55:03.929469 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75a2658c-c424-413f-9b8e-40a8cf3e6aeb-kube-api-access-h5jmf" (OuterVolumeSpecName: "kube-api-access-h5jmf") pod "75a2658c-c424-413f-9b8e-40a8cf3e6aeb" (UID: "75a2658c-c424-413f-9b8e-40a8cf3e6aeb"). InnerVolumeSpecName "kube-api-access-h5jmf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.009932 4947 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75a2658c-c424-413f-9b8e-40a8cf3e6aeb-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.009986 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5jmf\" (UniqueName: \"kubernetes.io/projected/75a2658c-c424-413f-9b8e-40a8cf3e6aeb-kube-api-access-h5jmf\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.075884 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-50f4-account-create-update-9rpwl" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.085001 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2380-account-create-update-8fbcp" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.097212 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6pgp4" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.112106 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3da3753b-36fd-41b4-a3d9-34ad05c5e2f1-operator-scripts\") pod \"3da3753b-36fd-41b4-a3d9-34ad05c5e2f1\" (UID: \"3da3753b-36fd-41b4-a3d9-34ad05c5e2f1\") " Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.112783 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3da3753b-36fd-41b4-a3d9-34ad05c5e2f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3da3753b-36fd-41b4-a3d9-34ad05c5e2f1" (UID: "3da3753b-36fd-41b4-a3d9-34ad05c5e2f1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.112900 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qwwd\" (UniqueName: \"kubernetes.io/projected/3da3753b-36fd-41b4-a3d9-34ad05c5e2f1-kube-api-access-5qwwd\") pod \"3da3753b-36fd-41b4-a3d9-34ad05c5e2f1\" (UID: \"3da3753b-36fd-41b4-a3d9-34ad05c5e2f1\") " Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.113011 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq8lt\" (UniqueName: \"kubernetes.io/projected/0f8504e7-7313-440f-87b1-13a06167f241-kube-api-access-dq8lt\") pod \"0f8504e7-7313-440f-87b1-13a06167f241\" (UID: \"0f8504e7-7313-440f-87b1-13a06167f241\") " Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.113439 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28eb01a7-ba6c-4709-ba2c-031ef8fe4b17-operator-scripts\") pod \"28eb01a7-ba6c-4709-ba2c-031ef8fe4b17\" (UID: \"28eb01a7-ba6c-4709-ba2c-031ef8fe4b17\") " Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.113542 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f8504e7-7313-440f-87b1-13a06167f241-operator-scripts\") pod \"0f8504e7-7313-440f-87b1-13a06167f241\" (UID: \"0f8504e7-7313-440f-87b1-13a06167f241\") " Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.113630 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5ntk\" (UniqueName: \"kubernetes.io/projected/28eb01a7-ba6c-4709-ba2c-031ef8fe4b17-kube-api-access-v5ntk\") pod \"28eb01a7-ba6c-4709-ba2c-031ef8fe4b17\" (UID: \"28eb01a7-ba6c-4709-ba2c-031ef8fe4b17\") " Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.116681 4947 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/28eb01a7-ba6c-4709-ba2c-031ef8fe4b17-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "28eb01a7-ba6c-4709-ba2c-031ef8fe4b17" (UID: "28eb01a7-ba6c-4709-ba2c-031ef8fe4b17"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.117808 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f8504e7-7313-440f-87b1-13a06167f241-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0f8504e7-7313-440f-87b1-13a06167f241" (UID: "0f8504e7-7313-440f-87b1-13a06167f241"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.119253 4947 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3da3753b-36fd-41b4-a3d9-34ad05c5e2f1-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.119383 4947 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28eb01a7-ba6c-4709-ba2c-031ef8fe4b17-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.119424 4947 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f8504e7-7313-440f-87b1-13a06167f241-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.119270 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f8504e7-7313-440f-87b1-13a06167f241-kube-api-access-dq8lt" (OuterVolumeSpecName: "kube-api-access-dq8lt") pod "0f8504e7-7313-440f-87b1-13a06167f241" (UID: "0f8504e7-7313-440f-87b1-13a06167f241"). InnerVolumeSpecName "kube-api-access-dq8lt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.123217 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28eb01a7-ba6c-4709-ba2c-031ef8fe4b17-kube-api-access-v5ntk" (OuterVolumeSpecName: "kube-api-access-v5ntk") pod "28eb01a7-ba6c-4709-ba2c-031ef8fe4b17" (UID: "28eb01a7-ba6c-4709-ba2c-031ef8fe4b17"). InnerVolumeSpecName "kube-api-access-v5ntk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.127949 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da3753b-36fd-41b4-a3d9-34ad05c5e2f1-kube-api-access-5qwwd" (OuterVolumeSpecName: "kube-api-access-5qwwd") pod "3da3753b-36fd-41b4-a3d9-34ad05c5e2f1" (UID: "3da3753b-36fd-41b4-a3d9-34ad05c5e2f1"). InnerVolumeSpecName "kube-api-access-5qwwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.222185 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qwwd\" (UniqueName: \"kubernetes.io/projected/3da3753b-36fd-41b4-a3d9-34ad05c5e2f1-kube-api-access-5qwwd\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.222248 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq8lt\" (UniqueName: \"kubernetes.io/projected/0f8504e7-7313-440f-87b1-13a06167f241-kube-api-access-dq8lt\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.222260 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5ntk\" (UniqueName: \"kubernetes.io/projected/28eb01a7-ba6c-4709-ba2c-031ef8fe4b17-kube-api-access-v5ntk\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.486503 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6pgp4" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.486508 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6pgp4" event={"ID":"28eb01a7-ba6c-4709-ba2c-031ef8fe4b17","Type":"ContainerDied","Data":"2475b1bb749de867950ba8f1c0d4a13f7d39432fcb9b32c592e434e65e7494d6"} Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.486618 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2475b1bb749de867950ba8f1c0d4a13f7d39432fcb9b32c592e434e65e7494d6" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.488799 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-50f4-account-create-update-9rpwl" event={"ID":"0f8504e7-7313-440f-87b1-13a06167f241","Type":"ContainerDied","Data":"ea6c98cba11cd3d623160ee29ab784834cd10158cf824e06e497e579fc55dc6e"} Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.488959 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea6c98cba11cd3d623160ee29ab784834cd10158cf824e06e497e579fc55dc6e" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.488829 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-50f4-account-create-update-9rpwl" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.492940 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-d5thj" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.492968 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-d5thj" event={"ID":"75a2658c-c424-413f-9b8e-40a8cf3e6aeb","Type":"ContainerDied","Data":"3e4a19988f7eff385a47e50f02c3a71d67236cfc61079546a3a5d3ddfb7fc720"} Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.493023 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e4a19988f7eff385a47e50f02c3a71d67236cfc61079546a3a5d3ddfb7fc720" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.496498 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2380-account-create-update-8fbcp" event={"ID":"3da3753b-36fd-41b4-a3d9-34ad05c5e2f1","Type":"ContainerDied","Data":"de114af35f95603ce933b73ee4b7c65281e7173b3c6d1e962a8e1e36ae7e12e3"} Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.496549 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de114af35f95603ce933b73ee4b7c65281e7173b3c6d1e962a8e1e36ae7e12e3" Nov 29 06:55:04 crc kubenswrapper[4947]: I1129 06:55:04.496556 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2380-account-create-update-8fbcp" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.376244 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-g5qfw"] Nov 29 06:55:05 crc kubenswrapper[4947]: E1129 06:55:05.377010 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da3753b-36fd-41b4-a3d9-34ad05c5e2f1" containerName="mariadb-account-create-update" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.377029 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da3753b-36fd-41b4-a3d9-34ad05c5e2f1" containerName="mariadb-account-create-update" Nov 29 06:55:05 crc kubenswrapper[4947]: E1129 06:55:05.377057 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a2658c-c424-413f-9b8e-40a8cf3e6aeb" containerName="mariadb-database-create" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.377065 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a2658c-c424-413f-9b8e-40a8cf3e6aeb" containerName="mariadb-database-create" Nov 29 06:55:05 crc kubenswrapper[4947]: E1129 06:55:05.377084 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28eb01a7-ba6c-4709-ba2c-031ef8fe4b17" containerName="mariadb-database-create" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.377093 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="28eb01a7-ba6c-4709-ba2c-031ef8fe4b17" containerName="mariadb-database-create" Nov 29 06:55:05 crc kubenswrapper[4947]: E1129 06:55:05.377134 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f8504e7-7313-440f-87b1-13a06167f241" containerName="mariadb-account-create-update" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.377143 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f8504e7-7313-440f-87b1-13a06167f241" containerName="mariadb-account-create-update" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.377405 4947 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0f8504e7-7313-440f-87b1-13a06167f241" containerName="mariadb-account-create-update" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.377421 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="28eb01a7-ba6c-4709-ba2c-031ef8fe4b17" containerName="mariadb-database-create" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.377431 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a2658c-c424-413f-9b8e-40a8cf3e6aeb" containerName="mariadb-database-create" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.377448 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da3753b-36fd-41b4-a3d9-34ad05c5e2f1" containerName="mariadb-account-create-update" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.378172 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-g5qfw" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.387456 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-g5qfw"] Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.456278 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjmdk\" (UniqueName: \"kubernetes.io/projected/4647fd5c-6d46-4947-be25-554fa8a74cff-kube-api-access-mjmdk\") pod \"glance-db-create-g5qfw\" (UID: \"4647fd5c-6d46-4947-be25-554fa8a74cff\") " pod="openstack/glance-db-create-g5qfw" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.456353 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4647fd5c-6d46-4947-be25-554fa8a74cff-operator-scripts\") pod \"glance-db-create-g5qfw\" (UID: \"4647fd5c-6d46-4947-be25-554fa8a74cff\") " pod="openstack/glance-db-create-g5qfw" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.483170 
4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-f74a-account-create-update-jsfv8"] Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.484610 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f74a-account-create-update-jsfv8" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.487454 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.503786 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f74a-account-create-update-jsfv8"] Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.558652 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjmdk\" (UniqueName: \"kubernetes.io/projected/4647fd5c-6d46-4947-be25-554fa8a74cff-kube-api-access-mjmdk\") pod \"glance-db-create-g5qfw\" (UID: \"4647fd5c-6d46-4947-be25-554fa8a74cff\") " pod="openstack/glance-db-create-g5qfw" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.558721 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4647fd5c-6d46-4947-be25-554fa8a74cff-operator-scripts\") pod \"glance-db-create-g5qfw\" (UID: \"4647fd5c-6d46-4947-be25-554fa8a74cff\") " pod="openstack/glance-db-create-g5qfw" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.558866 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnxrc\" (UniqueName: \"kubernetes.io/projected/7d8e8106-6ded-493a-8df9-9c798209d461-kube-api-access-pnxrc\") pod \"glance-f74a-account-create-update-jsfv8\" (UID: \"7d8e8106-6ded-493a-8df9-9c798209d461\") " pod="openstack/glance-f74a-account-create-update-jsfv8" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.558931 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d8e8106-6ded-493a-8df9-9c798209d461-operator-scripts\") pod \"glance-f74a-account-create-update-jsfv8\" (UID: \"7d8e8106-6ded-493a-8df9-9c798209d461\") " pod="openstack/glance-f74a-account-create-update-jsfv8" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.559894 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4647fd5c-6d46-4947-be25-554fa8a74cff-operator-scripts\") pod \"glance-db-create-g5qfw\" (UID: \"4647fd5c-6d46-4947-be25-554fa8a74cff\") " pod="openstack/glance-db-create-g5qfw" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.588384 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjmdk\" (UniqueName: \"kubernetes.io/projected/4647fd5c-6d46-4947-be25-554fa8a74cff-kube-api-access-mjmdk\") pod \"glance-db-create-g5qfw\" (UID: \"4647fd5c-6d46-4947-be25-554fa8a74cff\") " pod="openstack/glance-db-create-g5qfw" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.661692 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnxrc\" (UniqueName: \"kubernetes.io/projected/7d8e8106-6ded-493a-8df9-9c798209d461-kube-api-access-pnxrc\") pod \"glance-f74a-account-create-update-jsfv8\" (UID: \"7d8e8106-6ded-493a-8df9-9c798209d461\") " pod="openstack/glance-f74a-account-create-update-jsfv8" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.662455 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d8e8106-6ded-493a-8df9-9c798209d461-operator-scripts\") pod \"glance-f74a-account-create-update-jsfv8\" (UID: \"7d8e8106-6ded-493a-8df9-9c798209d461\") " pod="openstack/glance-f74a-account-create-update-jsfv8" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.663420 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d8e8106-6ded-493a-8df9-9c798209d461-operator-scripts\") pod \"glance-f74a-account-create-update-jsfv8\" (UID: \"7d8e8106-6ded-493a-8df9-9c798209d461\") " pod="openstack/glance-f74a-account-create-update-jsfv8" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.685875 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnxrc\" (UniqueName: \"kubernetes.io/projected/7d8e8106-6ded-493a-8df9-9c798209d461-kube-api-access-pnxrc\") pod \"glance-f74a-account-create-update-jsfv8\" (UID: \"7d8e8106-6ded-493a-8df9-9c798209d461\") " pod="openstack/glance-f74a-account-create-update-jsfv8" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.696718 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-g5qfw" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.804627 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f74a-account-create-update-jsfv8" Nov 29 06:55:05 crc kubenswrapper[4947]: I1129 06:55:05.961466 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-g5qfw"] Nov 29 06:55:06 crc kubenswrapper[4947]: I1129 06:55:06.133771 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f74a-account-create-update-jsfv8"] Nov 29 06:55:06 crc kubenswrapper[4947]: W1129 06:55:06.150267 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d8e8106_6ded_493a_8df9_9c798209d461.slice/crio-292816b7fccd3ce3b21f1d808000526e0112cc4ed44bdd14b5a250b183817692 WatchSource:0}: Error finding container 292816b7fccd3ce3b21f1d808000526e0112cc4ed44bdd14b5a250b183817692: Status 404 returned error can't find the container with id 292816b7fccd3ce3b21f1d808000526e0112cc4ed44bdd14b5a250b183817692 Nov 29 06:55:06 crc kubenswrapper[4947]: I1129 06:55:06.516852 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f74a-account-create-update-jsfv8" event={"ID":"7d8e8106-6ded-493a-8df9-9c798209d461","Type":"ContainerStarted","Data":"489752600eba33d8113eba1165505f1a4acd04e6273f14c16462bf120ce22f63"} Nov 29 06:55:06 crc kubenswrapper[4947]: I1129 06:55:06.517268 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f74a-account-create-update-jsfv8" event={"ID":"7d8e8106-6ded-493a-8df9-9c798209d461","Type":"ContainerStarted","Data":"292816b7fccd3ce3b21f1d808000526e0112cc4ed44bdd14b5a250b183817692"} Nov 29 06:55:06 crc kubenswrapper[4947]: I1129 06:55:06.520475 4947 generic.go:334] "Generic (PLEG): container finished" podID="4647fd5c-6d46-4947-be25-554fa8a74cff" containerID="604d1f2f681d00aabfa37f506853ca330af00f3d7877a66d7bdff7733b89f7b0" exitCode=0 Nov 29 06:55:06 crc kubenswrapper[4947]: I1129 06:55:06.520571 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-db-create-g5qfw" event={"ID":"4647fd5c-6d46-4947-be25-554fa8a74cff","Type":"ContainerDied","Data":"604d1f2f681d00aabfa37f506853ca330af00f3d7877a66d7bdff7733b89f7b0"} Nov 29 06:55:06 crc kubenswrapper[4947]: I1129 06:55:06.520624 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-g5qfw" event={"ID":"4647fd5c-6d46-4947-be25-554fa8a74cff","Type":"ContainerStarted","Data":"e03d8e1991a2aefa0a3f18586a029d76505ac92cbdc289c10644e2b4e4f04778"} Nov 29 06:55:06 crc kubenswrapper[4947]: I1129 06:55:06.539083 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-f74a-account-create-update-jsfv8" podStartSLOduration=1.539060063 podStartE2EDuration="1.539060063s" podCreationTimestamp="2025-11-29 06:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:55:06.534665656 +0000 UTC m=+1257.579047747" watchObservedRunningTime="2025-11-29 06:55:06.539060063 +0000 UTC m=+1257.583442144" Nov 29 06:55:06 crc kubenswrapper[4947]: I1129 06:55:06.549288 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 29 06:55:06 crc kubenswrapper[4947]: I1129 06:55:06.933439 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 29 06:55:07 crc kubenswrapper[4947]: I1129 06:55:07.531089 4947 generic.go:334] "Generic (PLEG): container finished" podID="7d8e8106-6ded-493a-8df9-9c798209d461" containerID="489752600eba33d8113eba1165505f1a4acd04e6273f14c16462bf120ce22f63" exitCode=0 Nov 29 06:55:07 crc kubenswrapper[4947]: I1129 06:55:07.531302 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f74a-account-create-update-jsfv8" 
event={"ID":"7d8e8106-6ded-493a-8df9-9c798209d461","Type":"ContainerDied","Data":"489752600eba33d8113eba1165505f1a4acd04e6273f14c16462bf120ce22f63"} Nov 29 06:55:07 crc kubenswrapper[4947]: I1129 06:55:07.860902 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-g5qfw" Nov 29 06:55:07 crc kubenswrapper[4947]: I1129 06:55:07.905930 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4647fd5c-6d46-4947-be25-554fa8a74cff-operator-scripts\") pod \"4647fd5c-6d46-4947-be25-554fa8a74cff\" (UID: \"4647fd5c-6d46-4947-be25-554fa8a74cff\") " Nov 29 06:55:07 crc kubenswrapper[4947]: I1129 06:55:07.906294 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjmdk\" (UniqueName: \"kubernetes.io/projected/4647fd5c-6d46-4947-be25-554fa8a74cff-kube-api-access-mjmdk\") pod \"4647fd5c-6d46-4947-be25-554fa8a74cff\" (UID: \"4647fd5c-6d46-4947-be25-554fa8a74cff\") " Nov 29 06:55:07 crc kubenswrapper[4947]: I1129 06:55:07.906684 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4647fd5c-6d46-4947-be25-554fa8a74cff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4647fd5c-6d46-4947-be25-554fa8a74cff" (UID: "4647fd5c-6d46-4947-be25-554fa8a74cff"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:55:07 crc kubenswrapper[4947]: I1129 06:55:07.907438 4947 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4647fd5c-6d46-4947-be25-554fa8a74cff-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:07 crc kubenswrapper[4947]: I1129 06:55:07.935642 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4647fd5c-6d46-4947-be25-554fa8a74cff-kube-api-access-mjmdk" (OuterVolumeSpecName: "kube-api-access-mjmdk") pod "4647fd5c-6d46-4947-be25-554fa8a74cff" (UID: "4647fd5c-6d46-4947-be25-554fa8a74cff"). InnerVolumeSpecName "kube-api-access-mjmdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:55:08 crc kubenswrapper[4947]: I1129 06:55:08.009660 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjmdk\" (UniqueName: \"kubernetes.io/projected/4647fd5c-6d46-4947-be25-554fa8a74cff-kube-api-access-mjmdk\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:08 crc kubenswrapper[4947]: I1129 06:55:08.542483 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-g5qfw" Nov 29 06:55:08 crc kubenswrapper[4947]: I1129 06:55:08.542472 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-g5qfw" event={"ID":"4647fd5c-6d46-4947-be25-554fa8a74cff","Type":"ContainerDied","Data":"e03d8e1991a2aefa0a3f18586a029d76505ac92cbdc289c10644e2b4e4f04778"} Nov 29 06:55:08 crc kubenswrapper[4947]: I1129 06:55:08.542650 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e03d8e1991a2aefa0a3f18586a029d76505ac92cbdc289c10644e2b4e4f04778" Nov 29 06:55:08 crc kubenswrapper[4947]: I1129 06:55:08.925192 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f74a-account-create-update-jsfv8" Nov 29 06:55:08 crc kubenswrapper[4947]: I1129 06:55:08.926247 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-g7bk7"] Nov 29 06:55:08 crc kubenswrapper[4947]: E1129 06:55:08.926688 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4647fd5c-6d46-4947-be25-554fa8a74cff" containerName="mariadb-database-create" Nov 29 06:55:08 crc kubenswrapper[4947]: I1129 06:55:08.926707 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="4647fd5c-6d46-4947-be25-554fa8a74cff" containerName="mariadb-database-create" Nov 29 06:55:08 crc kubenswrapper[4947]: E1129 06:55:08.926716 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d8e8106-6ded-493a-8df9-9c798209d461" containerName="mariadb-account-create-update" Nov 29 06:55:08 crc kubenswrapper[4947]: I1129 06:55:08.926723 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d8e8106-6ded-493a-8df9-9c798209d461" containerName="mariadb-account-create-update" Nov 29 06:55:08 crc kubenswrapper[4947]: I1129 06:55:08.926913 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="4647fd5c-6d46-4947-be25-554fa8a74cff" containerName="mariadb-database-create" Nov 29 06:55:08 crc kubenswrapper[4947]: I1129 06:55:08.926935 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d8e8106-6ded-493a-8df9-9c798209d461" containerName="mariadb-account-create-update" Nov 29 06:55:08 crc kubenswrapper[4947]: I1129 06:55:08.927548 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-g7bk7" Nov 29 06:55:08 crc kubenswrapper[4947]: I1129 06:55:08.954318 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-g7bk7"] Nov 29 06:55:08 crc kubenswrapper[4947]: I1129 06:55:08.967813 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-04ce-account-create-update-jjbtk"] Nov 29 06:55:08 crc kubenswrapper[4947]: I1129 06:55:08.971308 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-04ce-account-create-update-jjbtk" Nov 29 06:55:08 crc kubenswrapper[4947]: I1129 06:55:08.973870 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 29 06:55:08 crc kubenswrapper[4947]: I1129 06:55:08.998801 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-04ce-account-create-update-jjbtk"] Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.029698 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d8e8106-6ded-493a-8df9-9c798209d461-operator-scripts\") pod \"7d8e8106-6ded-493a-8df9-9c798209d461\" (UID: \"7d8e8106-6ded-493a-8df9-9c798209d461\") " Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.029971 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnxrc\" (UniqueName: \"kubernetes.io/projected/7d8e8106-6ded-493a-8df9-9c798209d461-kube-api-access-pnxrc\") pod \"7d8e8106-6ded-493a-8df9-9c798209d461\" (UID: \"7d8e8106-6ded-493a-8df9-9c798209d461\") " Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.030307 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1884516f-3c31-4dc9-8d46-4cceeefeb6e2-operator-scripts\") pod \"cinder-db-create-g7bk7\" (UID: 
\"1884516f-3c31-4dc9-8d46-4cceeefeb6e2\") " pod="openstack/cinder-db-create-g7bk7" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.030449 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j97vd\" (UniqueName: \"kubernetes.io/projected/7ff65d93-9651-43e9-9309-49ed52f33a3c-kube-api-access-j97vd\") pod \"cinder-04ce-account-create-update-jjbtk\" (UID: \"7ff65d93-9651-43e9-9309-49ed52f33a3c\") " pod="openstack/cinder-04ce-account-create-update-jjbtk" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.030496 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff65d93-9651-43e9-9309-49ed52f33a3c-operator-scripts\") pod \"cinder-04ce-account-create-update-jjbtk\" (UID: \"7ff65d93-9651-43e9-9309-49ed52f33a3c\") " pod="openstack/cinder-04ce-account-create-update-jjbtk" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.030502 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d8e8106-6ded-493a-8df9-9c798209d461-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d8e8106-6ded-493a-8df9-9c798209d461" (UID: "7d8e8106-6ded-493a-8df9-9c798209d461"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.030560 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfffw\" (UniqueName: \"kubernetes.io/projected/1884516f-3c31-4dc9-8d46-4cceeefeb6e2-kube-api-access-gfffw\") pod \"cinder-db-create-g7bk7\" (UID: \"1884516f-3c31-4dc9-8d46-4cceeefeb6e2\") " pod="openstack/cinder-db-create-g7bk7" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.030706 4947 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d8e8106-6ded-493a-8df9-9c798209d461-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.039739 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-274hz"] Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.041427 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-274hz" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.057358 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d8e8106-6ded-493a-8df9-9c798209d461-kube-api-access-pnxrc" (OuterVolumeSpecName: "kube-api-access-pnxrc") pod "7d8e8106-6ded-493a-8df9-9c798209d461" (UID: "7d8e8106-6ded-493a-8df9-9c798209d461"). InnerVolumeSpecName "kube-api-access-pnxrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.081908 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8407-account-create-update-b65nc"] Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.083069 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8407-account-create-update-b65nc" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.088573 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.107822 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8407-account-create-update-b65nc"] Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.123470 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-274hz"] Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.133417 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j97vd\" (UniqueName: \"kubernetes.io/projected/7ff65d93-9651-43e9-9309-49ed52f33a3c-kube-api-access-j97vd\") pod \"cinder-04ce-account-create-update-jjbtk\" (UID: \"7ff65d93-9651-43e9-9309-49ed52f33a3c\") " pod="openstack/cinder-04ce-account-create-update-jjbtk" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.133477 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff65d93-9651-43e9-9309-49ed52f33a3c-operator-scripts\") pod \"cinder-04ce-account-create-update-jjbtk\" (UID: \"7ff65d93-9651-43e9-9309-49ed52f33a3c\") " pod="openstack/cinder-04ce-account-create-update-jjbtk" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.133521 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k84lp\" (UniqueName: \"kubernetes.io/projected/e8a15105-2612-40d5-b685-5ee0c0ad58a8-kube-api-access-k84lp\") pod \"barbican-db-create-274hz\" (UID: \"e8a15105-2612-40d5-b685-5ee0c0ad58a8\") " pod="openstack/barbican-db-create-274hz" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.133550 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gfffw\" (UniqueName: \"kubernetes.io/projected/1884516f-3c31-4dc9-8d46-4cceeefeb6e2-kube-api-access-gfffw\") pod \"cinder-db-create-g7bk7\" (UID: \"1884516f-3c31-4dc9-8d46-4cceeefeb6e2\") " pod="openstack/cinder-db-create-g7bk7" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.133578 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1884516f-3c31-4dc9-8d46-4cceeefeb6e2-operator-scripts\") pod \"cinder-db-create-g7bk7\" (UID: \"1884516f-3c31-4dc9-8d46-4cceeefeb6e2\") " pod="openstack/cinder-db-create-g7bk7" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.133603 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c243880-90ea-479d-bba2-a12f36ad3e82-operator-scripts\") pod \"barbican-8407-account-create-update-b65nc\" (UID: \"9c243880-90ea-479d-bba2-a12f36ad3e82\") " pod="openstack/barbican-8407-account-create-update-b65nc" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.133628 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8a15105-2612-40d5-b685-5ee0c0ad58a8-operator-scripts\") pod \"barbican-db-create-274hz\" (UID: \"e8a15105-2612-40d5-b685-5ee0c0ad58a8\") " pod="openstack/barbican-db-create-274hz" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.133658 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6zh5\" (UniqueName: \"kubernetes.io/projected/9c243880-90ea-479d-bba2-a12f36ad3e82-kube-api-access-t6zh5\") pod \"barbican-8407-account-create-update-b65nc\" (UID: \"9c243880-90ea-479d-bba2-a12f36ad3e82\") " pod="openstack/barbican-8407-account-create-update-b65nc" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.133720 4947 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnxrc\" (UniqueName: \"kubernetes.io/projected/7d8e8106-6ded-493a-8df9-9c798209d461-kube-api-access-pnxrc\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.134616 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1884516f-3c31-4dc9-8d46-4cceeefeb6e2-operator-scripts\") pod \"cinder-db-create-g7bk7\" (UID: \"1884516f-3c31-4dc9-8d46-4cceeefeb6e2\") " pod="openstack/cinder-db-create-g7bk7" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.134843 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff65d93-9651-43e9-9309-49ed52f33a3c-operator-scripts\") pod \"cinder-04ce-account-create-update-jjbtk\" (UID: \"7ff65d93-9651-43e9-9309-49ed52f33a3c\") " pod="openstack/cinder-04ce-account-create-update-jjbtk" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.161586 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfffw\" (UniqueName: \"kubernetes.io/projected/1884516f-3c31-4dc9-8d46-4cceeefeb6e2-kube-api-access-gfffw\") pod \"cinder-db-create-g7bk7\" (UID: \"1884516f-3c31-4dc9-8d46-4cceeefeb6e2\") " pod="openstack/cinder-db-create-g7bk7" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.171391 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j97vd\" (UniqueName: \"kubernetes.io/projected/7ff65d93-9651-43e9-9309-49ed52f33a3c-kube-api-access-j97vd\") pod \"cinder-04ce-account-create-update-jjbtk\" (UID: \"7ff65d93-9651-43e9-9309-49ed52f33a3c\") " pod="openstack/cinder-04ce-account-create-update-jjbtk" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.236094 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k84lp\" (UniqueName: 
\"kubernetes.io/projected/e8a15105-2612-40d5-b685-5ee0c0ad58a8-kube-api-access-k84lp\") pod \"barbican-db-create-274hz\" (UID: \"e8a15105-2612-40d5-b685-5ee0c0ad58a8\") " pod="openstack/barbican-db-create-274hz" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.236179 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c243880-90ea-479d-bba2-a12f36ad3e82-operator-scripts\") pod \"barbican-8407-account-create-update-b65nc\" (UID: \"9c243880-90ea-479d-bba2-a12f36ad3e82\") " pod="openstack/barbican-8407-account-create-update-b65nc" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.236207 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8a15105-2612-40d5-b685-5ee0c0ad58a8-operator-scripts\") pod \"barbican-db-create-274hz\" (UID: \"e8a15105-2612-40d5-b685-5ee0c0ad58a8\") " pod="openstack/barbican-db-create-274hz" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.236246 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6zh5\" (UniqueName: \"kubernetes.io/projected/9c243880-90ea-479d-bba2-a12f36ad3e82-kube-api-access-t6zh5\") pod \"barbican-8407-account-create-update-b65nc\" (UID: \"9c243880-90ea-479d-bba2-a12f36ad3e82\") " pod="openstack/barbican-8407-account-create-update-b65nc" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.237505 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c243880-90ea-479d-bba2-a12f36ad3e82-operator-scripts\") pod \"barbican-8407-account-create-update-b65nc\" (UID: \"9c243880-90ea-479d-bba2-a12f36ad3e82\") " pod="openstack/barbican-8407-account-create-update-b65nc" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.238669 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8a15105-2612-40d5-b685-5ee0c0ad58a8-operator-scripts\") pod \"barbican-db-create-274hz\" (UID: \"e8a15105-2612-40d5-b685-5ee0c0ad58a8\") " pod="openstack/barbican-db-create-274hz" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.246663 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-g7bk7" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.252371 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-svhxq"] Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.261122 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-svhxq" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.265041 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-svhxq"] Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.270930 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.271130 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.271353 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6zh5\" (UniqueName: \"kubernetes.io/projected/9c243880-90ea-479d-bba2-a12f36ad3e82-kube-api-access-t6zh5\") pod \"barbican-8407-account-create-update-b65nc\" (UID: \"9c243880-90ea-479d-bba2-a12f36ad3e82\") " pod="openstack/barbican-8407-account-create-update-b65nc" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.271402 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.271470 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zkrv9" Nov 29 
06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.283074 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k84lp\" (UniqueName: \"kubernetes.io/projected/e8a15105-2612-40d5-b685-5ee0c0ad58a8-kube-api-access-k84lp\") pod \"barbican-db-create-274hz\" (UID: \"e8a15105-2612-40d5-b685-5ee0c0ad58a8\") " pod="openstack/barbican-db-create-274hz" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.299561 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-04ce-account-create-update-jjbtk" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.338811 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96a21275-0215-4d96-adcd-6e60d7a2c900-combined-ca-bundle\") pod \"keystone-db-sync-svhxq\" (UID: \"96a21275-0215-4d96-adcd-6e60d7a2c900\") " pod="openstack/keystone-db-sync-svhxq" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.338883 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96a21275-0215-4d96-adcd-6e60d7a2c900-config-data\") pod \"keystone-db-sync-svhxq\" (UID: \"96a21275-0215-4d96-adcd-6e60d7a2c900\") " pod="openstack/keystone-db-sync-svhxq" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.338998 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxbhm\" (UniqueName: \"kubernetes.io/projected/96a21275-0215-4d96-adcd-6e60d7a2c900-kube-api-access-dxbhm\") pod \"keystone-db-sync-svhxq\" (UID: \"96a21275-0215-4d96-adcd-6e60d7a2c900\") " pod="openstack/keystone-db-sync-svhxq" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.401966 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-274hz" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.423765 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8407-account-create-update-b65nc" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.437657 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b914-account-create-update-c249x"] Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.439236 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b914-account-create-update-c249x" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.441429 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxbhm\" (UniqueName: \"kubernetes.io/projected/96a21275-0215-4d96-adcd-6e60d7a2c900-kube-api-access-dxbhm\") pod \"keystone-db-sync-svhxq\" (UID: \"96a21275-0215-4d96-adcd-6e60d7a2c900\") " pod="openstack/keystone-db-sync-svhxq" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.441532 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c59e9c9-8ffc-422e-8565-7aa51d7ae12d-operator-scripts\") pod \"neutron-b914-account-create-update-c249x\" (UID: \"8c59e9c9-8ffc-422e-8565-7aa51d7ae12d\") " pod="openstack/neutron-b914-account-create-update-c249x" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.441570 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96a21275-0215-4d96-adcd-6e60d7a2c900-combined-ca-bundle\") pod \"keystone-db-sync-svhxq\" (UID: \"96a21275-0215-4d96-adcd-6e60d7a2c900\") " pod="openstack/keystone-db-sync-svhxq" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.441600 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/96a21275-0215-4d96-adcd-6e60d7a2c900-config-data\") pod \"keystone-db-sync-svhxq\" (UID: \"96a21275-0215-4d96-adcd-6e60d7a2c900\") " pod="openstack/keystone-db-sync-svhxq" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.441654 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5wfw\" (UniqueName: \"kubernetes.io/projected/8c59e9c9-8ffc-422e-8565-7aa51d7ae12d-kube-api-access-f5wfw\") pod \"neutron-b914-account-create-update-c249x\" (UID: \"8c59e9c9-8ffc-422e-8565-7aa51d7ae12d\") " pod="openstack/neutron-b914-account-create-update-c249x" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.443548 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.448429 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96a21275-0215-4d96-adcd-6e60d7a2c900-config-data\") pod \"keystone-db-sync-svhxq\" (UID: \"96a21275-0215-4d96-adcd-6e60d7a2c900\") " pod="openstack/keystone-db-sync-svhxq" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.452850 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96a21275-0215-4d96-adcd-6e60d7a2c900-combined-ca-bundle\") pod \"keystone-db-sync-svhxq\" (UID: \"96a21275-0215-4d96-adcd-6e60d7a2c900\") " pod="openstack/keystone-db-sync-svhxq" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.470104 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-p4xsj"] Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.470806 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxbhm\" (UniqueName: \"kubernetes.io/projected/96a21275-0215-4d96-adcd-6e60d7a2c900-kube-api-access-dxbhm\") pod 
\"keystone-db-sync-svhxq\" (UID: \"96a21275-0215-4d96-adcd-6e60d7a2c900\") " pod="openstack/keystone-db-sync-svhxq" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.471664 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-p4xsj" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.502017 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b914-account-create-update-c249x"] Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.531426 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-p4xsj"] Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.543546 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5wfw\" (UniqueName: \"kubernetes.io/projected/8c59e9c9-8ffc-422e-8565-7aa51d7ae12d-kube-api-access-f5wfw\") pod \"neutron-b914-account-create-update-c249x\" (UID: \"8c59e9c9-8ffc-422e-8565-7aa51d7ae12d\") " pod="openstack/neutron-b914-account-create-update-c249x" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.543685 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a315def0-b8c0-4a35-95e3-faf969bb901c-operator-scripts\") pod \"neutron-db-create-p4xsj\" (UID: \"a315def0-b8c0-4a35-95e3-faf969bb901c\") " pod="openstack/neutron-db-create-p4xsj" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.543995 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz9ps\" (UniqueName: \"kubernetes.io/projected/a315def0-b8c0-4a35-95e3-faf969bb901c-kube-api-access-zz9ps\") pod \"neutron-db-create-p4xsj\" (UID: \"a315def0-b8c0-4a35-95e3-faf969bb901c\") " pod="openstack/neutron-db-create-p4xsj" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.544047 4947 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c59e9c9-8ffc-422e-8565-7aa51d7ae12d-operator-scripts\") pod \"neutron-b914-account-create-update-c249x\" (UID: \"8c59e9c9-8ffc-422e-8565-7aa51d7ae12d\") " pod="openstack/neutron-b914-account-create-update-c249x" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.544827 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c59e9c9-8ffc-422e-8565-7aa51d7ae12d-operator-scripts\") pod \"neutron-b914-account-create-update-c249x\" (UID: \"8c59e9c9-8ffc-422e-8565-7aa51d7ae12d\") " pod="openstack/neutron-b914-account-create-update-c249x" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.573061 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5wfw\" (UniqueName: \"kubernetes.io/projected/8c59e9c9-8ffc-422e-8565-7aa51d7ae12d-kube-api-access-f5wfw\") pod \"neutron-b914-account-create-update-c249x\" (UID: \"8c59e9c9-8ffc-422e-8565-7aa51d7ae12d\") " pod="openstack/neutron-b914-account-create-update-c249x" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.579310 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f74a-account-create-update-jsfv8" event={"ID":"7d8e8106-6ded-493a-8df9-9c798209d461","Type":"ContainerDied","Data":"292816b7fccd3ce3b21f1d808000526e0112cc4ed44bdd14b5a250b183817692"} Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.579359 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="292816b7fccd3ce3b21f1d808000526e0112cc4ed44bdd14b5a250b183817692" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.579439 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f74a-account-create-update-jsfv8" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.713944 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-svhxq" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.716541 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a315def0-b8c0-4a35-95e3-faf969bb901c-operator-scripts\") pod \"neutron-db-create-p4xsj\" (UID: \"a315def0-b8c0-4a35-95e3-faf969bb901c\") " pod="openstack/neutron-db-create-p4xsj" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.716692 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz9ps\" (UniqueName: \"kubernetes.io/projected/a315def0-b8c0-4a35-95e3-faf969bb901c-kube-api-access-zz9ps\") pod \"neutron-db-create-p4xsj\" (UID: \"a315def0-b8c0-4a35-95e3-faf969bb901c\") " pod="openstack/neutron-db-create-p4xsj" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.717544 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a315def0-b8c0-4a35-95e3-faf969bb901c-operator-scripts\") pod \"neutron-db-create-p4xsj\" (UID: \"a315def0-b8c0-4a35-95e3-faf969bb901c\") " pod="openstack/neutron-db-create-p4xsj" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.744196 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz9ps\" (UniqueName: \"kubernetes.io/projected/a315def0-b8c0-4a35-95e3-faf969bb901c-kube-api-access-zz9ps\") pod \"neutron-db-create-p4xsj\" (UID: \"a315def0-b8c0-4a35-95e3-faf969bb901c\") " pod="openstack/neutron-db-create-p4xsj" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.770465 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b914-account-create-update-c249x" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.824522 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-p4xsj" Nov 29 06:55:09 crc kubenswrapper[4947]: I1129 06:55:09.881927 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-g7bk7"] Nov 29 06:55:09 crc kubenswrapper[4947]: W1129 06:55:09.934015 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1884516f_3c31_4dc9_8d46_4cceeefeb6e2.slice/crio-bad9842f558aac908a1075db628c5ad2395d768b9c774e5d93fbe101dda0064c WatchSource:0}: Error finding container bad9842f558aac908a1075db628c5ad2395d768b9c774e5d93fbe101dda0064c: Status 404 returned error can't find the container with id bad9842f558aac908a1075db628c5ad2395d768b9c774e5d93fbe101dda0064c Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.136639 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-04ce-account-create-update-jjbtk"] Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.188683 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 29 06:55:10 crc kubenswrapper[4947]: W1129 06:55:10.249808 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8a15105_2612_40d5_b685_5ee0c0ad58a8.slice/crio-e1ccc307ab5db67352890011416eb3a1e519c8e430d3af5643decbd85e9f6a9c WatchSource:0}: Error finding container e1ccc307ab5db67352890011416eb3a1e519c8e430d3af5643decbd85e9f6a9c: Status 404 returned error can't find the container with id e1ccc307ab5db67352890011416eb3a1e519c8e430d3af5643decbd85e9f6a9c Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.262775 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-274hz"] Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.396572 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8407-account-create-update-b65nc"] Nov 29 06:55:10 crc 
kubenswrapper[4947]: I1129 06:55:10.406746 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b914-account-create-update-c249x"] Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.457523 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.503134 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-p4xsj"] Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.510809 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-svhxq"] Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.591794 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-274hz" event={"ID":"e8a15105-2612-40d5-b685-5ee0c0ad58a8","Type":"ContainerStarted","Data":"e1ccc307ab5db67352890011416eb3a1e519c8e430d3af5643decbd85e9f6a9c"} Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.593803 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ce-account-create-update-jjbtk" event={"ID":"7ff65d93-9651-43e9-9309-49ed52f33a3c","Type":"ContainerStarted","Data":"114ba3ded2e51e2ad3ed31a9e60e0950d3eda74989f501fc7b27fb8fb6aa922f"} Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.596918 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-p4xsj" event={"ID":"a315def0-b8c0-4a35-95e3-faf969bb901c","Type":"ContainerStarted","Data":"79d2b5f44b78ffdb0744e41a66bf1df4d65c5a35703af25c92c220ac123248b8"} Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.598875 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-g7bk7" event={"ID":"1884516f-3c31-4dc9-8d46-4cceeefeb6e2","Type":"ContainerStarted","Data":"3a45ff862944a36f00d1b34cdd784774b21726d2522be0cabb02e89822623af4"} Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.598909 4947 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-db-create-g7bk7" event={"ID":"1884516f-3c31-4dc9-8d46-4cceeefeb6e2","Type":"ContainerStarted","Data":"bad9842f558aac908a1075db628c5ad2395d768b9c774e5d93fbe101dda0064c"} Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.603066 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8407-account-create-update-b65nc" event={"ID":"9c243880-90ea-479d-bba2-a12f36ad3e82","Type":"ContainerStarted","Data":"9c66651f6ba3bb0353166fc5205cf23f8b654b0400e080ff2009c1976e514a5d"} Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.609061 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b914-account-create-update-c249x" event={"ID":"8c59e9c9-8ffc-422e-8565-7aa51d7ae12d","Type":"ContainerStarted","Data":"281ad3ed22bda670a9041ec8de7c661cb67b8f9561515a00a3b32374f5c3cce9"} Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.616033 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-svhxq" event={"ID":"96a21275-0215-4d96-adcd-6e60d7a2c900","Type":"ContainerStarted","Data":"66e4b33ac35d5726c8d56c98539fad6b9523a1c111817a192b5f6d1895da403e"} Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.639438 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-g7bk7" podStartSLOduration=2.639388607 podStartE2EDuration="2.639388607s" podCreationTimestamp="2025-11-29 06:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:55:10.623423448 +0000 UTC m=+1261.667805549" watchObservedRunningTime="2025-11-29 06:55:10.639388607 +0000 UTC m=+1261.683770688" Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.792108 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-6njvc"] Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.794459 4947 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/glance-db-sync-6njvc" Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.798016 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.798788 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8278w" Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.882672 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d3675d1-9c60-4463-936e-95953f64b250-db-sync-config-data\") pod \"glance-db-sync-6njvc\" (UID: \"7d3675d1-9c60-4463-936e-95953f64b250\") " pod="openstack/glance-db-sync-6njvc" Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.882919 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3675d1-9c60-4463-936e-95953f64b250-combined-ca-bundle\") pod \"glance-db-sync-6njvc\" (UID: \"7d3675d1-9c60-4463-936e-95953f64b250\") " pod="openstack/glance-db-sync-6njvc" Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.882970 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkm2g\" (UniqueName: \"kubernetes.io/projected/7d3675d1-9c60-4463-936e-95953f64b250-kube-api-access-bkm2g\") pod \"glance-db-sync-6njvc\" (UID: \"7d3675d1-9c60-4463-936e-95953f64b250\") " pod="openstack/glance-db-sync-6njvc" Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.883054 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3675d1-9c60-4463-936e-95953f64b250-config-data\") pod \"glance-db-sync-6njvc\" (UID: \"7d3675d1-9c60-4463-936e-95953f64b250\") " pod="openstack/glance-db-sync-6njvc" Nov 29 
06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.895845 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6njvc"] Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.985756 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3675d1-9c60-4463-936e-95953f64b250-combined-ca-bundle\") pod \"glance-db-sync-6njvc\" (UID: \"7d3675d1-9c60-4463-936e-95953f64b250\") " pod="openstack/glance-db-sync-6njvc" Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.985902 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkm2g\" (UniqueName: \"kubernetes.io/projected/7d3675d1-9c60-4463-936e-95953f64b250-kube-api-access-bkm2g\") pod \"glance-db-sync-6njvc\" (UID: \"7d3675d1-9c60-4463-936e-95953f64b250\") " pod="openstack/glance-db-sync-6njvc" Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.986068 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3675d1-9c60-4463-936e-95953f64b250-config-data\") pod \"glance-db-sync-6njvc\" (UID: \"7d3675d1-9c60-4463-936e-95953f64b250\") " pod="openstack/glance-db-sync-6njvc" Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.986316 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d3675d1-9c60-4463-936e-95953f64b250-db-sync-config-data\") pod \"glance-db-sync-6njvc\" (UID: \"7d3675d1-9c60-4463-936e-95953f64b250\") " pod="openstack/glance-db-sync-6njvc" Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.995197 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d3675d1-9c60-4463-936e-95953f64b250-db-sync-config-data\") pod \"glance-db-sync-6njvc\" (UID: \"7d3675d1-9c60-4463-936e-95953f64b250\") " 
pod="openstack/glance-db-sync-6njvc" Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.995245 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3675d1-9c60-4463-936e-95953f64b250-combined-ca-bundle\") pod \"glance-db-sync-6njvc\" (UID: \"7d3675d1-9c60-4463-936e-95953f64b250\") " pod="openstack/glance-db-sync-6njvc" Nov 29 06:55:10 crc kubenswrapper[4947]: I1129 06:55:10.998881 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3675d1-9c60-4463-936e-95953f64b250-config-data\") pod \"glance-db-sync-6njvc\" (UID: \"7d3675d1-9c60-4463-936e-95953f64b250\") " pod="openstack/glance-db-sync-6njvc" Nov 29 06:55:11 crc kubenswrapper[4947]: I1129 06:55:11.026333 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkm2g\" (UniqueName: \"kubernetes.io/projected/7d3675d1-9c60-4463-936e-95953f64b250-kube-api-access-bkm2g\") pod \"glance-db-sync-6njvc\" (UID: \"7d3675d1-9c60-4463-936e-95953f64b250\") " pod="openstack/glance-db-sync-6njvc" Nov 29 06:55:11 crc kubenswrapper[4947]: I1129 06:55:11.208463 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6njvc" Nov 29 06:55:11 crc kubenswrapper[4947]: I1129 06:55:11.630802 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8407-account-create-update-b65nc" event={"ID":"9c243880-90ea-479d-bba2-a12f36ad3e82","Type":"ContainerStarted","Data":"70c1c80e0cceb47f2f19077b4c6b7eed8596e31cd8ca081697a71fb968008929"} Nov 29 06:55:11 crc kubenswrapper[4947]: I1129 06:55:11.639943 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b914-account-create-update-c249x" event={"ID":"8c59e9c9-8ffc-422e-8565-7aa51d7ae12d","Type":"ContainerStarted","Data":"a7dff5927bee0a0a68415cc71926f507e73174840345428cacecb266d68514e2"} Nov 29 06:55:11 crc kubenswrapper[4947]: I1129 06:55:11.641877 4947 generic.go:334] "Generic (PLEG): container finished" podID="e8a15105-2612-40d5-b685-5ee0c0ad58a8" containerID="076f2fa9cbe8cfa6fdb60e5d12fc40cde04b899b6a01e3ccce60342199d349f3" exitCode=0 Nov 29 06:55:11 crc kubenswrapper[4947]: I1129 06:55:11.641922 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-274hz" event={"ID":"e8a15105-2612-40d5-b685-5ee0c0ad58a8","Type":"ContainerDied","Data":"076f2fa9cbe8cfa6fdb60e5d12fc40cde04b899b6a01e3ccce60342199d349f3"} Nov 29 06:55:11 crc kubenswrapper[4947]: I1129 06:55:11.643922 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ce-account-create-update-jjbtk" event={"ID":"7ff65d93-9651-43e9-9309-49ed52f33a3c","Type":"ContainerStarted","Data":"3ab396327df99ffa84c29b3623168cbf3e22dbe0f86b662c4b147a5cf43b0979"} Nov 29 06:55:11 crc kubenswrapper[4947]: I1129 06:55:11.651126 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-p4xsj" event={"ID":"a315def0-b8c0-4a35-95e3-faf969bb901c","Type":"ContainerStarted","Data":"d848c790d67b09dce2b34a90b7ce52c5ec4bf0f0296069b9065fb2b090e897af"} Nov 29 06:55:11 crc kubenswrapper[4947]: I1129 06:55:11.652821 4947 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-8407-account-create-update-b65nc" podStartSLOduration=2.652795915 podStartE2EDuration="2.652795915s" podCreationTimestamp="2025-11-29 06:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:55:11.646611584 +0000 UTC m=+1262.690993665" watchObservedRunningTime="2025-11-29 06:55:11.652795915 +0000 UTC m=+1262.697177996" Nov 29 06:55:11 crc kubenswrapper[4947]: I1129 06:55:11.657421 4947 generic.go:334] "Generic (PLEG): container finished" podID="1884516f-3c31-4dc9-8d46-4cceeefeb6e2" containerID="3a45ff862944a36f00d1b34cdd784774b21726d2522be0cabb02e89822623af4" exitCode=0 Nov 29 06:55:11 crc kubenswrapper[4947]: I1129 06:55:11.657438 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-g7bk7" event={"ID":"1884516f-3c31-4dc9-8d46-4cceeefeb6e2","Type":"ContainerDied","Data":"3a45ff862944a36f00d1b34cdd784774b21726d2522be0cabb02e89822623af4"} Nov 29 06:55:11 crc kubenswrapper[4947]: I1129 06:55:11.667865 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-04ce-account-create-update-jjbtk" podStartSLOduration=3.667839862 podStartE2EDuration="3.667839862s" podCreationTimestamp="2025-11-29 06:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:55:11.665412253 +0000 UTC m=+1262.709794334" watchObservedRunningTime="2025-11-29 06:55:11.667839862 +0000 UTC m=+1262.712221943" Nov 29 06:55:11 crc kubenswrapper[4947]: I1129 06:55:11.722534 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b914-account-create-update-c249x" podStartSLOduration=2.722509414 podStartE2EDuration="2.722509414s" podCreationTimestamp="2025-11-29 06:55:09 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:55:11.705207262 +0000 UTC m=+1262.749589343" watchObservedRunningTime="2025-11-29 06:55:11.722509414 +0000 UTC m=+1262.766891495" Nov 29 06:55:11 crc kubenswrapper[4947]: I1129 06:55:11.753424 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-p4xsj" podStartSLOduration=2.753405257 podStartE2EDuration="2.753405257s" podCreationTimestamp="2025-11-29 06:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:55:11.748009456 +0000 UTC m=+1262.792391567" watchObservedRunningTime="2025-11-29 06:55:11.753405257 +0000 UTC m=+1262.797787338" Nov 29 06:55:11 crc kubenswrapper[4947]: I1129 06:55:11.913107 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6njvc"] Nov 29 06:55:11 crc kubenswrapper[4947]: W1129 06:55:11.931307 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d3675d1_9c60_4463_936e_95953f64b250.slice/crio-47e7035241b780ef8abfda771ec110db3ef2e45156a801a53e025ddbabc06125 WatchSource:0}: Error finding container 47e7035241b780ef8abfda771ec110db3ef2e45156a801a53e025ddbabc06125: Status 404 returned error can't find the container with id 47e7035241b780ef8abfda771ec110db3ef2e45156a801a53e025ddbabc06125 Nov 29 06:55:12 crc kubenswrapper[4947]: I1129 06:55:12.671502 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6njvc" event={"ID":"7d3675d1-9c60-4463-936e-95953f64b250","Type":"ContainerStarted","Data":"47e7035241b780ef8abfda771ec110db3ef2e45156a801a53e025ddbabc06125"} Nov 29 06:55:12 crc kubenswrapper[4947]: I1129 06:55:12.676434 4947 generic.go:334] "Generic (PLEG): container finished" 
podID="7ff65d93-9651-43e9-9309-49ed52f33a3c" containerID="3ab396327df99ffa84c29b3623168cbf3e22dbe0f86b662c4b147a5cf43b0979" exitCode=0 Nov 29 06:55:12 crc kubenswrapper[4947]: I1129 06:55:12.676532 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ce-account-create-update-jjbtk" event={"ID":"7ff65d93-9651-43e9-9309-49ed52f33a3c","Type":"ContainerDied","Data":"3ab396327df99ffa84c29b3623168cbf3e22dbe0f86b662c4b147a5cf43b0979"} Nov 29 06:55:12 crc kubenswrapper[4947]: I1129 06:55:12.682682 4947 generic.go:334] "Generic (PLEG): container finished" podID="a315def0-b8c0-4a35-95e3-faf969bb901c" containerID="d848c790d67b09dce2b34a90b7ce52c5ec4bf0f0296069b9065fb2b090e897af" exitCode=0 Nov 29 06:55:12 crc kubenswrapper[4947]: I1129 06:55:12.682913 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-p4xsj" event={"ID":"a315def0-b8c0-4a35-95e3-faf969bb901c","Type":"ContainerDied","Data":"d848c790d67b09dce2b34a90b7ce52c5ec4bf0f0296069b9065fb2b090e897af"} Nov 29 06:55:13 crc kubenswrapper[4947]: I1129 06:55:13.107748 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-g7bk7" Nov 29 06:55:13 crc kubenswrapper[4947]: I1129 06:55:13.115603 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-274hz" Nov 29 06:55:13 crc kubenswrapper[4947]: I1129 06:55:13.240928 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8a15105-2612-40d5-b685-5ee0c0ad58a8-operator-scripts\") pod \"e8a15105-2612-40d5-b685-5ee0c0ad58a8\" (UID: \"e8a15105-2612-40d5-b685-5ee0c0ad58a8\") " Nov 29 06:55:13 crc kubenswrapper[4947]: I1129 06:55:13.241052 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1884516f-3c31-4dc9-8d46-4cceeefeb6e2-operator-scripts\") pod \"1884516f-3c31-4dc9-8d46-4cceeefeb6e2\" (UID: \"1884516f-3c31-4dc9-8d46-4cceeefeb6e2\") " Nov 29 06:55:13 crc kubenswrapper[4947]: I1129 06:55:13.241132 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfffw\" (UniqueName: \"kubernetes.io/projected/1884516f-3c31-4dc9-8d46-4cceeefeb6e2-kube-api-access-gfffw\") pod \"1884516f-3c31-4dc9-8d46-4cceeefeb6e2\" (UID: \"1884516f-3c31-4dc9-8d46-4cceeefeb6e2\") " Nov 29 06:55:13 crc kubenswrapper[4947]: I1129 06:55:13.241240 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k84lp\" (UniqueName: \"kubernetes.io/projected/e8a15105-2612-40d5-b685-5ee0c0ad58a8-kube-api-access-k84lp\") pod \"e8a15105-2612-40d5-b685-5ee0c0ad58a8\" (UID: \"e8a15105-2612-40d5-b685-5ee0c0ad58a8\") " Nov 29 06:55:13 crc kubenswrapper[4947]: I1129 06:55:13.242132 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1884516f-3c31-4dc9-8d46-4cceeefeb6e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1884516f-3c31-4dc9-8d46-4cceeefeb6e2" (UID: "1884516f-3c31-4dc9-8d46-4cceeefeb6e2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:55:13 crc kubenswrapper[4947]: I1129 06:55:13.242135 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8a15105-2612-40d5-b685-5ee0c0ad58a8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e8a15105-2612-40d5-b685-5ee0c0ad58a8" (UID: "e8a15105-2612-40d5-b685-5ee0c0ad58a8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:55:13 crc kubenswrapper[4947]: I1129 06:55:13.251140 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8a15105-2612-40d5-b685-5ee0c0ad58a8-kube-api-access-k84lp" (OuterVolumeSpecName: "kube-api-access-k84lp") pod "e8a15105-2612-40d5-b685-5ee0c0ad58a8" (UID: "e8a15105-2612-40d5-b685-5ee0c0ad58a8"). InnerVolumeSpecName "kube-api-access-k84lp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:55:13 crc kubenswrapper[4947]: I1129 06:55:13.251342 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1884516f-3c31-4dc9-8d46-4cceeefeb6e2-kube-api-access-gfffw" (OuterVolumeSpecName: "kube-api-access-gfffw") pod "1884516f-3c31-4dc9-8d46-4cceeefeb6e2" (UID: "1884516f-3c31-4dc9-8d46-4cceeefeb6e2"). InnerVolumeSpecName "kube-api-access-gfffw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:55:13 crc kubenswrapper[4947]: I1129 06:55:13.343993 4947 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8a15105-2612-40d5-b685-5ee0c0ad58a8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:13 crc kubenswrapper[4947]: I1129 06:55:13.344111 4947 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1884516f-3c31-4dc9-8d46-4cceeefeb6e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:13 crc kubenswrapper[4947]: I1129 06:55:13.344129 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfffw\" (UniqueName: \"kubernetes.io/projected/1884516f-3c31-4dc9-8d46-4cceeefeb6e2-kube-api-access-gfffw\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:13 crc kubenswrapper[4947]: I1129 06:55:13.344148 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k84lp\" (UniqueName: \"kubernetes.io/projected/e8a15105-2612-40d5-b685-5ee0c0ad58a8-kube-api-access-k84lp\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:13 crc kubenswrapper[4947]: I1129 06:55:13.700779 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-g7bk7" Nov 29 06:55:13 crc kubenswrapper[4947]: I1129 06:55:13.700942 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-g7bk7" event={"ID":"1884516f-3c31-4dc9-8d46-4cceeefeb6e2","Type":"ContainerDied","Data":"bad9842f558aac908a1075db628c5ad2395d768b9c774e5d93fbe101dda0064c"} Nov 29 06:55:13 crc kubenswrapper[4947]: I1129 06:55:13.701865 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bad9842f558aac908a1075db628c5ad2395d768b9c774e5d93fbe101dda0064c" Nov 29 06:55:13 crc kubenswrapper[4947]: I1129 06:55:13.703571 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-274hz" Nov 29 06:55:13 crc kubenswrapper[4947]: I1129 06:55:13.706144 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-274hz" event={"ID":"e8a15105-2612-40d5-b685-5ee0c0ad58a8","Type":"ContainerDied","Data":"e1ccc307ab5db67352890011416eb3a1e519c8e430d3af5643decbd85e9f6a9c"} Nov 29 06:55:13 crc kubenswrapper[4947]: I1129 06:55:13.706182 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1ccc307ab5db67352890011416eb3a1e519c8e430d3af5643decbd85e9f6a9c" Nov 29 06:55:18 crc kubenswrapper[4947]: E1129 06:55:18.172465 4947 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c243880_90ea_479d_bba2_a12f36ad3e82.slice/crio-70c1c80e0cceb47f2f19077b4c6b7eed8596e31cd8ca081697a71fb968008929.scope\": RecentStats: unable to find data in memory cache]" Nov 29 06:55:18 crc kubenswrapper[4947]: I1129 06:55:18.787296 4947 generic.go:334] "Generic (PLEG): container finished" podID="9c243880-90ea-479d-bba2-a12f36ad3e82" containerID="70c1c80e0cceb47f2f19077b4c6b7eed8596e31cd8ca081697a71fb968008929" exitCode=0 Nov 29 06:55:18 crc kubenswrapper[4947]: I1129 06:55:18.787425 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8407-account-create-update-b65nc" event={"ID":"9c243880-90ea-479d-bba2-a12f36ad3e82","Type":"ContainerDied","Data":"70c1c80e0cceb47f2f19077b4c6b7eed8596e31cd8ca081697a71fb968008929"} Nov 29 06:55:18 crc kubenswrapper[4947]: I1129 06:55:18.810199 4947 generic.go:334] "Generic (PLEG): container finished" podID="8c59e9c9-8ffc-422e-8565-7aa51d7ae12d" containerID="a7dff5927bee0a0a68415cc71926f507e73174840345428cacecb266d68514e2" exitCode=0 Nov 29 06:55:18 crc kubenswrapper[4947]: I1129 06:55:18.810272 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-b914-account-create-update-c249x" event={"ID":"8c59e9c9-8ffc-422e-8565-7aa51d7ae12d","Type":"ContainerDied","Data":"a7dff5927bee0a0a68415cc71926f507e73174840345428cacecb266d68514e2"} Nov 29 06:55:22 crc kubenswrapper[4947]: I1129 06:55:22.987953 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 06:55:22 crc kubenswrapper[4947]: I1129 06:55:22.988692 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 06:55:22 crc kubenswrapper[4947]: I1129 06:55:22.988746 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 06:55:22 crc kubenswrapper[4947]: I1129 06:55:22.989552 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a415fb27869ca193be5294677b2f866f2ec48db054e83e7f53b656f014c7087f"} pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 06:55:22 crc kubenswrapper[4947]: I1129 06:55:22.989608 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" containerID="cri-o://a415fb27869ca193be5294677b2f866f2ec48db054e83e7f53b656f014c7087f" gracePeriod=600 Nov 29 
06:55:24 crc kubenswrapper[4947]: E1129 06:55:24.010326 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Nov 29 06:55:24 crc kubenswrapper[4947]: E1129 06:55:24.010989 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxbhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:
[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-svhxq_openstack(96a21275-0215-4d96-adcd-6e60d7a2c900): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:55:24 crc kubenswrapper[4947]: E1129 06:55:24.012385 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-svhxq" podUID="96a21275-0215-4d96-adcd-6e60d7a2c900" Nov 29 06:55:24 crc kubenswrapper[4947]: I1129 06:55:24.013232 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b914-account-create-update-c249x" Nov 29 06:55:24 crc kubenswrapper[4947]: I1129 06:55:24.103565 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c59e9c9-8ffc-422e-8565-7aa51d7ae12d-operator-scripts\") pod \"8c59e9c9-8ffc-422e-8565-7aa51d7ae12d\" (UID: \"8c59e9c9-8ffc-422e-8565-7aa51d7ae12d\") " Nov 29 06:55:24 crc kubenswrapper[4947]: I1129 06:55:24.103934 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5wfw\" (UniqueName: \"kubernetes.io/projected/8c59e9c9-8ffc-422e-8565-7aa51d7ae12d-kube-api-access-f5wfw\") pod \"8c59e9c9-8ffc-422e-8565-7aa51d7ae12d\" (UID: \"8c59e9c9-8ffc-422e-8565-7aa51d7ae12d\") " Nov 29 06:55:24 crc kubenswrapper[4947]: I1129 06:55:24.104564 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c59e9c9-8ffc-422e-8565-7aa51d7ae12d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c59e9c9-8ffc-422e-8565-7aa51d7ae12d" (UID: "8c59e9c9-8ffc-422e-8565-7aa51d7ae12d"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:55:24 crc kubenswrapper[4947]: I1129 06:55:24.110337 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c59e9c9-8ffc-422e-8565-7aa51d7ae12d-kube-api-access-f5wfw" (OuterVolumeSpecName: "kube-api-access-f5wfw") pod "8c59e9c9-8ffc-422e-8565-7aa51d7ae12d" (UID: "8c59e9c9-8ffc-422e-8565-7aa51d7ae12d"). InnerVolumeSpecName "kube-api-access-f5wfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:55:24 crc kubenswrapper[4947]: I1129 06:55:24.209938 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5wfw\" (UniqueName: \"kubernetes.io/projected/8c59e9c9-8ffc-422e-8565-7aa51d7ae12d-kube-api-access-f5wfw\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:24 crc kubenswrapper[4947]: I1129 06:55:24.209984 4947 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c59e9c9-8ffc-422e-8565-7aa51d7ae12d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:24 crc kubenswrapper[4947]: I1129 06:55:24.874114 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b914-account-create-update-c249x" event={"ID":"8c59e9c9-8ffc-422e-8565-7aa51d7ae12d","Type":"ContainerDied","Data":"281ad3ed22bda670a9041ec8de7c661cb67b8f9561515a00a3b32374f5c3cce9"} Nov 29 06:55:24 crc kubenswrapper[4947]: I1129 06:55:24.874540 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="281ad3ed22bda670a9041ec8de7c661cb67b8f9561515a00a3b32374f5c3cce9" Nov 29 06:55:24 crc kubenswrapper[4947]: I1129 06:55:24.874135 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b914-account-create-update-c249x" Nov 29 06:55:24 crc kubenswrapper[4947]: I1129 06:55:24.877709 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerID="a415fb27869ca193be5294677b2f866f2ec48db054e83e7f53b656f014c7087f" exitCode=0 Nov 29 06:55:24 crc kubenswrapper[4947]: I1129 06:55:24.877773 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerDied","Data":"a415fb27869ca193be5294677b2f866f2ec48db054e83e7f53b656f014c7087f"} Nov 29 06:55:24 crc kubenswrapper[4947]: I1129 06:55:24.877830 4947 scope.go:117] "RemoveContainer" containerID="95afd1d0c4fb1119bc14de336e7d92cb2ee91cd1747056ef7ee978c29db619c9" Nov 29 06:55:24 crc kubenswrapper[4947]: E1129 06:55:24.881627 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="openstack/keystone-db-sync-svhxq" podUID="96a21275-0215-4d96-adcd-6e60d7a2c900" Nov 29 06:55:32 crc kubenswrapper[4947]: E1129 06:55:32.200036 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Nov 29 06:55:32 crc kubenswrapper[4947]: E1129 06:55:32.201067 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bkm2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-6njvc_openstack(7d3675d1-9c60-4463-936e-95953f64b250): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Nov 29 06:55:32 crc kubenswrapper[4947]: E1129 06:55:32.202304 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-6njvc" podUID="7d3675d1-9c60-4463-936e-95953f64b250" Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.338291 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8407-account-create-update-b65nc" Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.353695 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-04ce-account-create-update-jjbtk" Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.377273 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-p4xsj" Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.397963 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz9ps\" (UniqueName: \"kubernetes.io/projected/a315def0-b8c0-4a35-95e3-faf969bb901c-kube-api-access-zz9ps\") pod \"a315def0-b8c0-4a35-95e3-faf969bb901c\" (UID: \"a315def0-b8c0-4a35-95e3-faf969bb901c\") " Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.398004 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j97vd\" (UniqueName: \"kubernetes.io/projected/7ff65d93-9651-43e9-9309-49ed52f33a3c-kube-api-access-j97vd\") pod \"7ff65d93-9651-43e9-9309-49ed52f33a3c\" (UID: \"7ff65d93-9651-43e9-9309-49ed52f33a3c\") " Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.398038 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c243880-90ea-479d-bba2-a12f36ad3e82-operator-scripts\") pod 
\"9c243880-90ea-479d-bba2-a12f36ad3e82\" (UID: \"9c243880-90ea-479d-bba2-a12f36ad3e82\") " Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.398117 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6zh5\" (UniqueName: \"kubernetes.io/projected/9c243880-90ea-479d-bba2-a12f36ad3e82-kube-api-access-t6zh5\") pod \"9c243880-90ea-479d-bba2-a12f36ad3e82\" (UID: \"9c243880-90ea-479d-bba2-a12f36ad3e82\") " Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.398230 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff65d93-9651-43e9-9309-49ed52f33a3c-operator-scripts\") pod \"7ff65d93-9651-43e9-9309-49ed52f33a3c\" (UID: \"7ff65d93-9651-43e9-9309-49ed52f33a3c\") " Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.400175 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ff65d93-9651-43e9-9309-49ed52f33a3c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ff65d93-9651-43e9-9309-49ed52f33a3c" (UID: "7ff65d93-9651-43e9-9309-49ed52f33a3c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.400173 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c243880-90ea-479d-bba2-a12f36ad3e82-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c243880-90ea-479d-bba2-a12f36ad3e82" (UID: "9c243880-90ea-479d-bba2-a12f36ad3e82"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.408233 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff65d93-9651-43e9-9309-49ed52f33a3c-kube-api-access-j97vd" (OuterVolumeSpecName: "kube-api-access-j97vd") pod "7ff65d93-9651-43e9-9309-49ed52f33a3c" (UID: "7ff65d93-9651-43e9-9309-49ed52f33a3c"). InnerVolumeSpecName "kube-api-access-j97vd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.411645 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a315def0-b8c0-4a35-95e3-faf969bb901c-kube-api-access-zz9ps" (OuterVolumeSpecName: "kube-api-access-zz9ps") pod "a315def0-b8c0-4a35-95e3-faf969bb901c" (UID: "a315def0-b8c0-4a35-95e3-faf969bb901c"). InnerVolumeSpecName "kube-api-access-zz9ps". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.419285 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c243880-90ea-479d-bba2-a12f36ad3e82-kube-api-access-t6zh5" (OuterVolumeSpecName: "kube-api-access-t6zh5") pod "9c243880-90ea-479d-bba2-a12f36ad3e82" (UID: "9c243880-90ea-479d-bba2-a12f36ad3e82"). InnerVolumeSpecName "kube-api-access-t6zh5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.500868 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a315def0-b8c0-4a35-95e3-faf969bb901c-operator-scripts\") pod \"a315def0-b8c0-4a35-95e3-faf969bb901c\" (UID: \"a315def0-b8c0-4a35-95e3-faf969bb901c\") " Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.501447 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a315def0-b8c0-4a35-95e3-faf969bb901c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a315def0-b8c0-4a35-95e3-faf969bb901c" (UID: "a315def0-b8c0-4a35-95e3-faf969bb901c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.502903 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz9ps\" (UniqueName: \"kubernetes.io/projected/a315def0-b8c0-4a35-95e3-faf969bb901c-kube-api-access-zz9ps\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.502986 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j97vd\" (UniqueName: \"kubernetes.io/projected/7ff65d93-9651-43e9-9309-49ed52f33a3c-kube-api-access-j97vd\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.503009 4947 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c243880-90ea-479d-bba2-a12f36ad3e82-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.503029 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6zh5\" (UniqueName: \"kubernetes.io/projected/9c243880-90ea-479d-bba2-a12f36ad3e82-kube-api-access-t6zh5\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:32 crc 
kubenswrapper[4947]: I1129 06:55:32.503054 4947 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff65d93-9651-43e9-9309-49ed52f33a3c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.605406 4947 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a315def0-b8c0-4a35-95e3-faf969bb901c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.970418 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-p4xsj" event={"ID":"a315def0-b8c0-4a35-95e3-faf969bb901c","Type":"ContainerDied","Data":"79d2b5f44b78ffdb0744e41a66bf1df4d65c5a35703af25c92c220ac123248b8"} Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.970952 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79d2b5f44b78ffdb0744e41a66bf1df4d65c5a35703af25c92c220ac123248b8" Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.970740 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-p4xsj" Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.973790 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8407-account-create-update-b65nc" Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.975136 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8407-account-create-update-b65nc" event={"ID":"9c243880-90ea-479d-bba2-a12f36ad3e82","Type":"ContainerDied","Data":"9c66651f6ba3bb0353166fc5205cf23f8b654b0400e080ff2009c1976e514a5d"} Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.975192 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c66651f6ba3bb0353166fc5205cf23f8b654b0400e080ff2009c1976e514a5d" Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.981778 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"df51260596870c91ccb9712810f435518b0c8fd5a5c15540a25aafaee5eb1aa5"} Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.984920 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-04ce-account-create-update-jjbtk" Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.985421 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-04ce-account-create-update-jjbtk" event={"ID":"7ff65d93-9651-43e9-9309-49ed52f33a3c","Type":"ContainerDied","Data":"114ba3ded2e51e2ad3ed31a9e60e0950d3eda74989f501fc7b27fb8fb6aa922f"} Nov 29 06:55:32 crc kubenswrapper[4947]: I1129 06:55:32.985477 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="114ba3ded2e51e2ad3ed31a9e60e0950d3eda74989f501fc7b27fb8fb6aa922f" Nov 29 06:55:32 crc kubenswrapper[4947]: E1129 06:55:32.986822 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-6njvc" podUID="7d3675d1-9c60-4463-936e-95953f64b250" Nov 29 06:55:41 crc kubenswrapper[4947]: I1129 06:55:41.070561 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-svhxq" event={"ID":"96a21275-0215-4d96-adcd-6e60d7a2c900","Type":"ContainerStarted","Data":"8c174c1525a70ec1156fba04120e45e8cd4417c637710bc58bcea553dfcb6d67"} Nov 29 06:55:41 crc kubenswrapper[4947]: I1129 06:55:41.100287 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-svhxq" podStartSLOduration=2.707697003 podStartE2EDuration="32.100261809s" podCreationTimestamp="2025-11-29 06:55:09 +0000 UTC" firstStartedPulling="2025-11-29 06:55:10.52257974 +0000 UTC m=+1261.566961821" lastFinishedPulling="2025-11-29 06:55:39.915144546 +0000 UTC m=+1290.959526627" observedRunningTime="2025-11-29 06:55:41.094509829 +0000 UTC m=+1292.138891920" watchObservedRunningTime="2025-11-29 06:55:41.100261809 +0000 UTC m=+1292.144643910" Nov 29 06:55:44 crc kubenswrapper[4947]: I1129 
06:55:44.097412 4947 generic.go:334] "Generic (PLEG): container finished" podID="96a21275-0215-4d96-adcd-6e60d7a2c900" containerID="8c174c1525a70ec1156fba04120e45e8cd4417c637710bc58bcea553dfcb6d67" exitCode=0 Nov 29 06:55:44 crc kubenswrapper[4947]: I1129 06:55:44.097496 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-svhxq" event={"ID":"96a21275-0215-4d96-adcd-6e60d7a2c900","Type":"ContainerDied","Data":"8c174c1525a70ec1156fba04120e45e8cd4417c637710bc58bcea553dfcb6d67"} Nov 29 06:55:45 crc kubenswrapper[4947]: I1129 06:55:45.617462 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-svhxq" Nov 29 06:55:45 crc kubenswrapper[4947]: I1129 06:55:45.790903 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96a21275-0215-4d96-adcd-6e60d7a2c900-combined-ca-bundle\") pod \"96a21275-0215-4d96-adcd-6e60d7a2c900\" (UID: \"96a21275-0215-4d96-adcd-6e60d7a2c900\") " Nov 29 06:55:45 crc kubenswrapper[4947]: I1129 06:55:45.791153 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxbhm\" (UniqueName: \"kubernetes.io/projected/96a21275-0215-4d96-adcd-6e60d7a2c900-kube-api-access-dxbhm\") pod \"96a21275-0215-4d96-adcd-6e60d7a2c900\" (UID: \"96a21275-0215-4d96-adcd-6e60d7a2c900\") " Nov 29 06:55:45 crc kubenswrapper[4947]: I1129 06:55:45.791198 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96a21275-0215-4d96-adcd-6e60d7a2c900-config-data\") pod \"96a21275-0215-4d96-adcd-6e60d7a2c900\" (UID: \"96a21275-0215-4d96-adcd-6e60d7a2c900\") " Nov 29 06:55:45 crc kubenswrapper[4947]: I1129 06:55:45.799297 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96a21275-0215-4d96-adcd-6e60d7a2c900-kube-api-access-dxbhm" 
(OuterVolumeSpecName: "kube-api-access-dxbhm") pod "96a21275-0215-4d96-adcd-6e60d7a2c900" (UID: "96a21275-0215-4d96-adcd-6e60d7a2c900"). InnerVolumeSpecName "kube-api-access-dxbhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:55:45 crc kubenswrapper[4947]: I1129 06:55:45.820295 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96a21275-0215-4d96-adcd-6e60d7a2c900-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96a21275-0215-4d96-adcd-6e60d7a2c900" (UID: "96a21275-0215-4d96-adcd-6e60d7a2c900"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:55:45 crc kubenswrapper[4947]: I1129 06:55:45.841246 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96a21275-0215-4d96-adcd-6e60d7a2c900-config-data" (OuterVolumeSpecName: "config-data") pod "96a21275-0215-4d96-adcd-6e60d7a2c900" (UID: "96a21275-0215-4d96-adcd-6e60d7a2c900"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:55:45 crc kubenswrapper[4947]: I1129 06:55:45.894843 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxbhm\" (UniqueName: \"kubernetes.io/projected/96a21275-0215-4d96-adcd-6e60d7a2c900-kube-api-access-dxbhm\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:45 crc kubenswrapper[4947]: I1129 06:55:45.894951 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96a21275-0215-4d96-adcd-6e60d7a2c900-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:45 crc kubenswrapper[4947]: I1129 06:55:45.894973 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96a21275-0215-4d96-adcd-6e60d7a2c900-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.117624 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-svhxq" event={"ID":"96a21275-0215-4d96-adcd-6e60d7a2c900","Type":"ContainerDied","Data":"66e4b33ac35d5726c8d56c98539fad6b9523a1c111817a192b5f6d1895da403e"} Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.117698 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66e4b33ac35d5726c8d56c98539fad6b9523a1c111817a192b5f6d1895da403e" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.117723 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-svhxq" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.396875 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-tmn66"] Nov 29 06:55:46 crc kubenswrapper[4947]: E1129 06:55:46.397726 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a315def0-b8c0-4a35-95e3-faf969bb901c" containerName="mariadb-database-create" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.397741 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="a315def0-b8c0-4a35-95e3-faf969bb901c" containerName="mariadb-database-create" Nov 29 06:55:46 crc kubenswrapper[4947]: E1129 06:55:46.397752 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c243880-90ea-479d-bba2-a12f36ad3e82" containerName="mariadb-account-create-update" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.397758 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c243880-90ea-479d-bba2-a12f36ad3e82" containerName="mariadb-account-create-update" Nov 29 06:55:46 crc kubenswrapper[4947]: E1129 06:55:46.397771 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff65d93-9651-43e9-9309-49ed52f33a3c" containerName="mariadb-account-create-update" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.397777 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff65d93-9651-43e9-9309-49ed52f33a3c" containerName="mariadb-account-create-update" Nov 29 06:55:46 crc kubenswrapper[4947]: E1129 06:55:46.397794 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8a15105-2612-40d5-b685-5ee0c0ad58a8" containerName="mariadb-database-create" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.397800 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8a15105-2612-40d5-b685-5ee0c0ad58a8" containerName="mariadb-database-create" Nov 29 06:55:46 crc kubenswrapper[4947]: E1129 06:55:46.397812 4947 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="96a21275-0215-4d96-adcd-6e60d7a2c900" containerName="keystone-db-sync" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.397818 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="96a21275-0215-4d96-adcd-6e60d7a2c900" containerName="keystone-db-sync" Nov 29 06:55:46 crc kubenswrapper[4947]: E1129 06:55:46.397832 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1884516f-3c31-4dc9-8d46-4cceeefeb6e2" containerName="mariadb-database-create" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.397838 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1884516f-3c31-4dc9-8d46-4cceeefeb6e2" containerName="mariadb-database-create" Nov 29 06:55:46 crc kubenswrapper[4947]: E1129 06:55:46.397851 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c59e9c9-8ffc-422e-8565-7aa51d7ae12d" containerName="mariadb-account-create-update" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.397856 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c59e9c9-8ffc-422e-8565-7aa51d7ae12d" containerName="mariadb-account-create-update" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.398013 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8a15105-2612-40d5-b685-5ee0c0ad58a8" containerName="mariadb-database-create" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.398032 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c243880-90ea-479d-bba2-a12f36ad3e82" containerName="mariadb-account-create-update" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.398047 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c59e9c9-8ffc-422e-8565-7aa51d7ae12d" containerName="mariadb-account-create-update" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.398054 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="1884516f-3c31-4dc9-8d46-4cceeefeb6e2" 
containerName="mariadb-database-create" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.398062 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="96a21275-0215-4d96-adcd-6e60d7a2c900" containerName="keystone-db-sync" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.398069 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="a315def0-b8c0-4a35-95e3-faf969bb901c" containerName="mariadb-database-create" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.398080 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff65d93-9651-43e9-9309-49ed52f33a3c" containerName="mariadb-account-create-update" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.399102 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-tmn66" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.452966 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mh2wf"] Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.454275 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mh2wf" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.462874 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.463194 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.463759 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.463922 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zkrv9" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.464099 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.483363 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-tmn66"] Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.509300 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mh2wf"] Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.517406 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-config\") pod \"dnsmasq-dns-75bb4695fc-tmn66\" (UID: \"d0723ce9-d9b2-4b12-91f1-220f211bac72\") " pod="openstack/dnsmasq-dns-75bb4695fc-tmn66" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.517496 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-dns-svc\") pod \"dnsmasq-dns-75bb4695fc-tmn66\" (UID: \"d0723ce9-d9b2-4b12-91f1-220f211bac72\") " pod="openstack/dnsmasq-dns-75bb4695fc-tmn66" Nov 29 
06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.517558 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drb5r\" (UniqueName: \"kubernetes.io/projected/d0723ce9-d9b2-4b12-91f1-220f211bac72-kube-api-access-drb5r\") pod \"dnsmasq-dns-75bb4695fc-tmn66\" (UID: \"d0723ce9-d9b2-4b12-91f1-220f211bac72\") " pod="openstack/dnsmasq-dns-75bb4695fc-tmn66" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.517588 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-ovsdbserver-nb\") pod \"dnsmasq-dns-75bb4695fc-tmn66\" (UID: \"d0723ce9-d9b2-4b12-91f1-220f211bac72\") " pod="openstack/dnsmasq-dns-75bb4695fc-tmn66" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.518299 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-ovsdbserver-sb\") pod \"dnsmasq-dns-75bb4695fc-tmn66\" (UID: \"d0723ce9-d9b2-4b12-91f1-220f211bac72\") " pod="openstack/dnsmasq-dns-75bb4695fc-tmn66" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.620838 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-fernet-keys\") pod \"keystone-bootstrap-mh2wf\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " pod="openstack/keystone-bootstrap-mh2wf" Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.620912 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-config-data\") pod \"keystone-bootstrap-mh2wf\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " 
pod="openstack/keystone-bootstrap-mh2wf"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.620935 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-scripts\") pod \"keystone-bootstrap-mh2wf\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " pod="openstack/keystone-bootstrap-mh2wf"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.620958 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-credential-keys\") pod \"keystone-bootstrap-mh2wf\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " pod="openstack/keystone-bootstrap-mh2wf"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.621025 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-ovsdbserver-sb\") pod \"dnsmasq-dns-75bb4695fc-tmn66\" (UID: \"d0723ce9-d9b2-4b12-91f1-220f211bac72\") " pod="openstack/dnsmasq-dns-75bb4695fc-tmn66"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.621079 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-config\") pod \"dnsmasq-dns-75bb4695fc-tmn66\" (UID: \"d0723ce9-d9b2-4b12-91f1-220f211bac72\") " pod="openstack/dnsmasq-dns-75bb4695fc-tmn66"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.621113 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-dns-svc\") pod \"dnsmasq-dns-75bb4695fc-tmn66\" (UID: \"d0723ce9-d9b2-4b12-91f1-220f211bac72\") " pod="openstack/dnsmasq-dns-75bb4695fc-tmn66"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.621149 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st96r\" (UniqueName: \"kubernetes.io/projected/9735c296-f485-4f6a-b52b-e3c08ff6593b-kube-api-access-st96r\") pod \"keystone-bootstrap-mh2wf\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " pod="openstack/keystone-bootstrap-mh2wf"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.621184 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-combined-ca-bundle\") pod \"keystone-bootstrap-mh2wf\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " pod="openstack/keystone-bootstrap-mh2wf"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.621239 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drb5r\" (UniqueName: \"kubernetes.io/projected/d0723ce9-d9b2-4b12-91f1-220f211bac72-kube-api-access-drb5r\") pod \"dnsmasq-dns-75bb4695fc-tmn66\" (UID: \"d0723ce9-d9b2-4b12-91f1-220f211bac72\") " pod="openstack/dnsmasq-dns-75bb4695fc-tmn66"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.621259 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-ovsdbserver-nb\") pod \"dnsmasq-dns-75bb4695fc-tmn66\" (UID: \"d0723ce9-d9b2-4b12-91f1-220f211bac72\") " pod="openstack/dnsmasq-dns-75bb4695fc-tmn66"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.622428 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-ovsdbserver-nb\") pod \"dnsmasq-dns-75bb4695fc-tmn66\" (UID: \"d0723ce9-d9b2-4b12-91f1-220f211bac72\") " pod="openstack/dnsmasq-dns-75bb4695fc-tmn66"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.623107 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-ovsdbserver-sb\") pod \"dnsmasq-dns-75bb4695fc-tmn66\" (UID: \"d0723ce9-d9b2-4b12-91f1-220f211bac72\") " pod="openstack/dnsmasq-dns-75bb4695fc-tmn66"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.623747 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-config\") pod \"dnsmasq-dns-75bb4695fc-tmn66\" (UID: \"d0723ce9-d9b2-4b12-91f1-220f211bac72\") " pod="openstack/dnsmasq-dns-75bb4695fc-tmn66"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.624405 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-dns-svc\") pod \"dnsmasq-dns-75bb4695fc-tmn66\" (UID: \"d0723ce9-d9b2-4b12-91f1-220f211bac72\") " pod="openstack/dnsmasq-dns-75bb4695fc-tmn66"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.646066 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-l67pc"]
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.647431 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-l67pc"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.652986 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.653333 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rss2f"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.653630 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.661007 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drb5r\" (UniqueName: \"kubernetes.io/projected/d0723ce9-d9b2-4b12-91f1-220f211bac72-kube-api-access-drb5r\") pod \"dnsmasq-dns-75bb4695fc-tmn66\" (UID: \"d0723ce9-d9b2-4b12-91f1-220f211bac72\") " pod="openstack/dnsmasq-dns-75bb4695fc-tmn66"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.682205 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-l67pc"]
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.723595 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st96r\" (UniqueName: \"kubernetes.io/projected/9735c296-f485-4f6a-b52b-e3c08ff6593b-kube-api-access-st96r\") pod \"keystone-bootstrap-mh2wf\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " pod="openstack/keystone-bootstrap-mh2wf"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.723666 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-combined-ca-bundle\") pod \"keystone-bootstrap-mh2wf\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " pod="openstack/keystone-bootstrap-mh2wf"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.723712 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-fernet-keys\") pod \"keystone-bootstrap-mh2wf\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " pod="openstack/keystone-bootstrap-mh2wf"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.723754 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-config-data\") pod \"keystone-bootstrap-mh2wf\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " pod="openstack/keystone-bootstrap-mh2wf"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.723774 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-scripts\") pod \"keystone-bootstrap-mh2wf\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " pod="openstack/keystone-bootstrap-mh2wf"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.723806 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-credential-keys\") pod \"keystone-bootstrap-mh2wf\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " pod="openstack/keystone-bootstrap-mh2wf"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.735661 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-credential-keys\") pod \"keystone-bootstrap-mh2wf\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " pod="openstack/keystone-bootstrap-mh2wf"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.737843 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-config-data\") pod \"keystone-bootstrap-mh2wf\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " pod="openstack/keystone-bootstrap-mh2wf"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.738675 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-scripts\") pod \"keystone-bootstrap-mh2wf\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " pod="openstack/keystone-bootstrap-mh2wf"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.745352 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-combined-ca-bundle\") pod \"keystone-bootstrap-mh2wf\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " pod="openstack/keystone-bootstrap-mh2wf"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.768951 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-tmn66"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.790195 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-fernet-keys\") pod \"keystone-bootstrap-mh2wf\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " pod="openstack/keystone-bootstrap-mh2wf"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.814258 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st96r\" (UniqueName: \"kubernetes.io/projected/9735c296-f485-4f6a-b52b-e3c08ff6593b-kube-api-access-st96r\") pod \"keystone-bootstrap-mh2wf\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " pod="openstack/keystone-bootstrap-mh2wf"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.818739 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-tmn66"]
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.828517 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-scripts\") pod \"cinder-db-sync-l67pc\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " pod="openstack/cinder-db-sync-l67pc"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.828596 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-config-data\") pod \"cinder-db-sync-l67pc\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " pod="openstack/cinder-db-sync-l67pc"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.828658 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-combined-ca-bundle\") pod \"cinder-db-sync-l67pc\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " pod="openstack/cinder-db-sync-l67pc"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.828684 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-db-sync-config-data\") pod \"cinder-db-sync-l67pc\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " pod="openstack/cinder-db-sync-l67pc"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.828715 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b341c7eb-214f-49d0-ae91-a27c56857739-etc-machine-id\") pod \"cinder-db-sync-l67pc\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " pod="openstack/cinder-db-sync-l67pc"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.828752 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpj5m\" (UniqueName: \"kubernetes.io/projected/b341c7eb-214f-49d0-ae91-a27c56857739-kube-api-access-rpj5m\") pod \"cinder-db-sync-l67pc\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " pod="openstack/cinder-db-sync-l67pc"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.864856 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-7fptv"]
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.890442 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-4sjrf"]
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.892686 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.900432 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4sjrf"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.904328 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.904943 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pf86q"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.905992 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.931609 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-scripts\") pod \"cinder-db-sync-l67pc\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " pod="openstack/cinder-db-sync-l67pc"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.931674 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-config-data\") pod \"cinder-db-sync-l67pc\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " pod="openstack/cinder-db-sync-l67pc"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.931745 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-combined-ca-bundle\") pod \"cinder-db-sync-l67pc\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " pod="openstack/cinder-db-sync-l67pc"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.931771 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-db-sync-config-data\") pod \"cinder-db-sync-l67pc\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " pod="openstack/cinder-db-sync-l67pc"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.931806 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b341c7eb-214f-49d0-ae91-a27c56857739-etc-machine-id\") pod \"cinder-db-sync-l67pc\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " pod="openstack/cinder-db-sync-l67pc"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.931849 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpj5m\" (UniqueName: \"kubernetes.io/projected/b341c7eb-214f-49d0-ae91-a27c56857739-kube-api-access-rpj5m\") pod \"cinder-db-sync-l67pc\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " pod="openstack/cinder-db-sync-l67pc"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.934737 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-7fptv"]
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.936366 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b341c7eb-214f-49d0-ae91-a27c56857739-etc-machine-id\") pod \"cinder-db-sync-l67pc\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " pod="openstack/cinder-db-sync-l67pc"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.943191 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-scripts\") pod \"cinder-db-sync-l67pc\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " pod="openstack/cinder-db-sync-l67pc"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.950320 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-combined-ca-bundle\") pod \"cinder-db-sync-l67pc\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " pod="openstack/cinder-db-sync-l67pc"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.951835 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-db-sync-config-data\") pod \"cinder-db-sync-l67pc\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " pod="openstack/cinder-db-sync-l67pc"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.966185 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-config-data\") pod \"cinder-db-sync-l67pc\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " pod="openstack/cinder-db-sync-l67pc"
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.974160 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-77d4t"]
Nov 29 06:55:46 crc kubenswrapper[4947]: I1129 06:55:46.999791 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-77d4t"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.006092 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpj5m\" (UniqueName: \"kubernetes.io/projected/b341c7eb-214f-49d0-ae91-a27c56857739-kube-api-access-rpj5m\") pod \"cinder-db-sync-l67pc\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " pod="openstack/cinder-db-sync-l67pc"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.011134 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-77d4t"]
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.026051 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.026398 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9znx8"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.045381 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4sjrf"]
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.049099 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-l67pc"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.050338 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-dns-svc\") pod \"dnsmasq-dns-745b9ddc8c-7fptv\" (UID: \"8bfd51e3-c63d-4382-a864-cb0570e277d8\") " pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.050407 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-config\") pod \"dnsmasq-dns-745b9ddc8c-7fptv\" (UID: \"8bfd51e3-c63d-4382-a864-cb0570e277d8\") " pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.050433 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af3ad5b2-503c-4d1c-927c-0feab47e5212-combined-ca-bundle\") pod \"neutron-db-sync-4sjrf\" (UID: \"af3ad5b2-503c-4d1c-927c-0feab47e5212\") " pod="openstack/neutron-db-sync-4sjrf"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.050475 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-ovsdbserver-nb\") pod \"dnsmasq-dns-745b9ddc8c-7fptv\" (UID: \"8bfd51e3-c63d-4382-a864-cb0570e277d8\") " pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.050522 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kdg4\" (UniqueName: \"kubernetes.io/projected/af3ad5b2-503c-4d1c-927c-0feab47e5212-kube-api-access-4kdg4\") pod \"neutron-db-sync-4sjrf\" (UID: \"af3ad5b2-503c-4d1c-927c-0feab47e5212\") " pod="openstack/neutron-db-sync-4sjrf"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.050576 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm66t\" (UniqueName: \"kubernetes.io/projected/8bfd51e3-c63d-4382-a864-cb0570e277d8-kube-api-access-fm66t\") pod \"dnsmasq-dns-745b9ddc8c-7fptv\" (UID: \"8bfd51e3-c63d-4382-a864-cb0570e277d8\") " pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.050621 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/af3ad5b2-503c-4d1c-927c-0feab47e5212-config\") pod \"neutron-db-sync-4sjrf\" (UID: \"af3ad5b2-503c-4d1c-927c-0feab47e5212\") " pod="openstack/neutron-db-sync-4sjrf"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.050645 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-ovsdbserver-sb\") pod \"dnsmasq-dns-745b9ddc8c-7fptv\" (UID: \"8bfd51e3-c63d-4382-a864-cb0570e277d8\") " pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.086646 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-55q5b"]
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.088567 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-55q5b"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.089425 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mh2wf"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.102769 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bf7dn"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.105062 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.107083 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.121304 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-55q5b"]
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.159179 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-ovsdbserver-nb\") pod \"dnsmasq-dns-745b9ddc8c-7fptv\" (UID: \"8bfd51e3-c63d-4382-a864-cb0570e277d8\") " pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.159289 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42494b25-5db7-478c-b2a9-e14c7d990c0c-logs\") pod \"placement-db-sync-55q5b\" (UID: \"42494b25-5db7-478c-b2a9-e14c7d990c0c\") " pod="openstack/placement-db-sync-55q5b"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.159336 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26cf011-4f52-4d26-a248-b92906824399-combined-ca-bundle\") pod \"barbican-db-sync-77d4t\" (UID: \"f26cf011-4f52-4d26-a248-b92906824399\") " pod="openstack/barbican-db-sync-77d4t"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.159364 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kdg4\" (UniqueName: \"kubernetes.io/projected/af3ad5b2-503c-4d1c-927c-0feab47e5212-kube-api-access-4kdg4\") pod \"neutron-db-sync-4sjrf\" (UID: \"af3ad5b2-503c-4d1c-927c-0feab47e5212\") " pod="openstack/neutron-db-sync-4sjrf"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.159393 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm66t\" (UniqueName: \"kubernetes.io/projected/8bfd51e3-c63d-4382-a864-cb0570e277d8-kube-api-access-fm66t\") pod \"dnsmasq-dns-745b9ddc8c-7fptv\" (UID: \"8bfd51e3-c63d-4382-a864-cb0570e277d8\") " pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.159421 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/af3ad5b2-503c-4d1c-927c-0feab47e5212-config\") pod \"neutron-db-sync-4sjrf\" (UID: \"af3ad5b2-503c-4d1c-927c-0feab47e5212\") " pod="openstack/neutron-db-sync-4sjrf"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.159445 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42494b25-5db7-478c-b2a9-e14c7d990c0c-combined-ca-bundle\") pod \"placement-db-sync-55q5b\" (UID: \"42494b25-5db7-478c-b2a9-e14c7d990c0c\") " pod="openstack/placement-db-sync-55q5b"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.159468 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-ovsdbserver-sb\") pod \"dnsmasq-dns-745b9ddc8c-7fptv\" (UID: \"8bfd51e3-c63d-4382-a864-cb0570e277d8\") " pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.159516 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hncz2\" (UniqueName: \"kubernetes.io/projected/f26cf011-4f52-4d26-a248-b92906824399-kube-api-access-hncz2\") pod \"barbican-db-sync-77d4t\" (UID: \"f26cf011-4f52-4d26-a248-b92906824399\") " pod="openstack/barbican-db-sync-77d4t"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.159554 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-dns-svc\") pod \"dnsmasq-dns-745b9ddc8c-7fptv\" (UID: \"8bfd51e3-c63d-4382-a864-cb0570e277d8\") " pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.159582 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42494b25-5db7-478c-b2a9-e14c7d990c0c-scripts\") pod \"placement-db-sync-55q5b\" (UID: \"42494b25-5db7-478c-b2a9-e14c7d990c0c\") " pod="openstack/placement-db-sync-55q5b"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.159613 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f26cf011-4f52-4d26-a248-b92906824399-db-sync-config-data\") pod \"barbican-db-sync-77d4t\" (UID: \"f26cf011-4f52-4d26-a248-b92906824399\") " pod="openstack/barbican-db-sync-77d4t"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.159645 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt8w8\" (UniqueName: \"kubernetes.io/projected/42494b25-5db7-478c-b2a9-e14c7d990c0c-kube-api-access-bt8w8\") pod \"placement-db-sync-55q5b\" (UID: \"42494b25-5db7-478c-b2a9-e14c7d990c0c\") " pod="openstack/placement-db-sync-55q5b"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.159664 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42494b25-5db7-478c-b2a9-e14c7d990c0c-config-data\") pod \"placement-db-sync-55q5b\" (UID: \"42494b25-5db7-478c-b2a9-e14c7d990c0c\") " pod="openstack/placement-db-sync-55q5b"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.159712 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-config\") pod \"dnsmasq-dns-745b9ddc8c-7fptv\" (UID: \"8bfd51e3-c63d-4382-a864-cb0570e277d8\") " pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.159739 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af3ad5b2-503c-4d1c-927c-0feab47e5212-combined-ca-bundle\") pod \"neutron-db-sync-4sjrf\" (UID: \"af3ad5b2-503c-4d1c-927c-0feab47e5212\") " pod="openstack/neutron-db-sync-4sjrf"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.166654 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-dns-svc\") pod \"dnsmasq-dns-745b9ddc8c-7fptv\" (UID: \"8bfd51e3-c63d-4382-a864-cb0570e277d8\") " pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.174103 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-ovsdbserver-sb\") pod \"dnsmasq-dns-745b9ddc8c-7fptv\" (UID: \"8bfd51e3-c63d-4382-a864-cb0570e277d8\") " pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.175196 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-ovsdbserver-nb\") pod \"dnsmasq-dns-745b9ddc8c-7fptv\" (UID: \"8bfd51e3-c63d-4382-a864-cb0570e277d8\") " pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.177821 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-config\") pod \"dnsmasq-dns-745b9ddc8c-7fptv\" (UID: \"8bfd51e3-c63d-4382-a864-cb0570e277d8\") " pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.196698 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af3ad5b2-503c-4d1c-927c-0feab47e5212-combined-ca-bundle\") pod \"neutron-db-sync-4sjrf\" (UID: \"af3ad5b2-503c-4d1c-927c-0feab47e5212\") " pod="openstack/neutron-db-sync-4sjrf"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.208093 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/af3ad5b2-503c-4d1c-927c-0feab47e5212-config\") pod \"neutron-db-sync-4sjrf\" (UID: \"af3ad5b2-503c-4d1c-927c-0feab47e5212\") " pod="openstack/neutron-db-sync-4sjrf"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.236556 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kdg4\" (UniqueName: \"kubernetes.io/projected/af3ad5b2-503c-4d1c-927c-0feab47e5212-kube-api-access-4kdg4\") pod \"neutron-db-sync-4sjrf\" (UID: \"af3ad5b2-503c-4d1c-927c-0feab47e5212\") " pod="openstack/neutron-db-sync-4sjrf"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.236705 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.240481 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.242748 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm66t\" (UniqueName: \"kubernetes.io/projected/8bfd51e3-c63d-4382-a864-cb0570e277d8-kube-api-access-fm66t\") pod \"dnsmasq-dns-745b9ddc8c-7fptv\" (UID: \"8bfd51e3-c63d-4382-a864-cb0570e277d8\") " pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.250404 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.282424 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42494b25-5db7-478c-b2a9-e14c7d990c0c-logs\") pod \"placement-db-sync-55q5b\" (UID: \"42494b25-5db7-478c-b2a9-e14c7d990c0c\") " pod="openstack/placement-db-sync-55q5b"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.282526 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26cf011-4f52-4d26-a248-b92906824399-combined-ca-bundle\") pod \"barbican-db-sync-77d4t\" (UID: \"f26cf011-4f52-4d26-a248-b92906824399\") " pod="openstack/barbican-db-sync-77d4t"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.282579 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42494b25-5db7-478c-b2a9-e14c7d990c0c-combined-ca-bundle\") pod \"placement-db-sync-55q5b\" (UID: \"42494b25-5db7-478c-b2a9-e14c7d990c0c\") " pod="openstack/placement-db-sync-55q5b"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.282636 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hncz2\" (UniqueName: \"kubernetes.io/projected/f26cf011-4f52-4d26-a248-b92906824399-kube-api-access-hncz2\") pod \"barbican-db-sync-77d4t\" (UID: \"f26cf011-4f52-4d26-a248-b92906824399\") " pod="openstack/barbican-db-sync-77d4t"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.282674 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42494b25-5db7-478c-b2a9-e14c7d990c0c-scripts\") pod \"placement-db-sync-55q5b\" (UID: \"42494b25-5db7-478c-b2a9-e14c7d990c0c\") " pod="openstack/placement-db-sync-55q5b"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.282701 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f26cf011-4f52-4d26-a248-b92906824399-db-sync-config-data\") pod \"barbican-db-sync-77d4t\" (UID: \"f26cf011-4f52-4d26-a248-b92906824399\") " pod="openstack/barbican-db-sync-77d4t"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.282710 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.282720 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt8w8\" (UniqueName: \"kubernetes.io/projected/42494b25-5db7-478c-b2a9-e14c7d990c0c-kube-api-access-bt8w8\") pod \"placement-db-sync-55q5b\" (UID: \"42494b25-5db7-478c-b2a9-e14c7d990c0c\") " pod="openstack/placement-db-sync-55q5b"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.283019 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42494b25-5db7-478c-b2a9-e14c7d990c0c-config-data\") pod \"placement-db-sync-55q5b\" (UID: \"42494b25-5db7-478c-b2a9-e14c7d990c0c\") " pod="openstack/placement-db-sync-55q5b"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.288295 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.292257 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42494b25-5db7-478c-b2a9-e14c7d990c0c-logs\") pod \"placement-db-sync-55q5b\" (UID: \"42494b25-5db7-478c-b2a9-e14c7d990c0c\") " pod="openstack/placement-db-sync-55q5b"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.298675 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f26cf011-4f52-4d26-a248-b92906824399-db-sync-config-data\") pod \"barbican-db-sync-77d4t\" (UID: \"f26cf011-4f52-4d26-a248-b92906824399\") " pod="openstack/barbican-db-sync-77d4t"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.302038 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42494b25-5db7-478c-b2a9-e14c7d990c0c-scripts\") pod \"placement-db-sync-55q5b\" (UID: \"42494b25-5db7-478c-b2a9-e14c7d990c0c\") " pod="openstack/placement-db-sync-55q5b"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.307263 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26cf011-4f52-4d26-a248-b92906824399-combined-ca-bundle\") pod \"barbican-db-sync-77d4t\" (UID: \"f26cf011-4f52-4d26-a248-b92906824399\") " pod="openstack/barbican-db-sync-77d4t"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.312498 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hncz2\" (UniqueName: \"kubernetes.io/projected/f26cf011-4f52-4d26-a248-b92906824399-kube-api-access-hncz2\") pod \"barbican-db-sync-77d4t\" (UID: \"f26cf011-4f52-4d26-a248-b92906824399\") " pod="openstack/barbican-db-sync-77d4t"
Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.314346 4947 kubelet.go:2428]
"SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.328378 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4sjrf" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.330788 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42494b25-5db7-478c-b2a9-e14c7d990c0c-combined-ca-bundle\") pod \"placement-db-sync-55q5b\" (UID: \"42494b25-5db7-478c-b2a9-e14c7d990c0c\") " pod="openstack/placement-db-sync-55q5b" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.334063 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt8w8\" (UniqueName: \"kubernetes.io/projected/42494b25-5db7-478c-b2a9-e14c7d990c0c-kube-api-access-bt8w8\") pod \"placement-db-sync-55q5b\" (UID: \"42494b25-5db7-478c-b2a9-e14c7d990c0c\") " pod="openstack/placement-db-sync-55q5b" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.334448 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42494b25-5db7-478c-b2a9-e14c7d990c0c-config-data\") pod \"placement-db-sync-55q5b\" (UID: \"42494b25-5db7-478c-b2a9-e14c7d990c0c\") " pod="openstack/placement-db-sync-55q5b" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.396533 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-run-httpd\") pod \"ceilometer-0\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " pod="openstack/ceilometer-0" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.396590 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-config-data\") pod 
\"ceilometer-0\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " pod="openstack/ceilometer-0" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.396697 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " pod="openstack/ceilometer-0" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.396763 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-log-httpd\") pod \"ceilometer-0\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " pod="openstack/ceilometer-0" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.396864 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-scripts\") pod \"ceilometer-0\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " pod="openstack/ceilometer-0" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.396898 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " pod="openstack/ceilometer-0" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.396952 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55z55\" (UniqueName: \"kubernetes.io/projected/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-kube-api-access-55z55\") pod \"ceilometer-0\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " pod="openstack/ceilometer-0" Nov 29 06:55:47 crc 
kubenswrapper[4947]: I1129 06:55:47.400533 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-77d4t" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.457866 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-55q5b" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.499622 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-run-httpd\") pod \"ceilometer-0\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " pod="openstack/ceilometer-0" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.499702 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-config-data\") pod \"ceilometer-0\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " pod="openstack/ceilometer-0" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.499833 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " pod="openstack/ceilometer-0" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.499875 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-log-httpd\") pod \"ceilometer-0\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " pod="openstack/ceilometer-0" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.499962 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-scripts\") pod 
\"ceilometer-0\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " pod="openstack/ceilometer-0" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.499993 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " pod="openstack/ceilometer-0" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.500056 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55z55\" (UniqueName: \"kubernetes.io/projected/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-kube-api-access-55z55\") pod \"ceilometer-0\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " pod="openstack/ceilometer-0" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.504325 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-run-httpd\") pod \"ceilometer-0\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " pod="openstack/ceilometer-0" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.504392 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-log-httpd\") pod \"ceilometer-0\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " pod="openstack/ceilometer-0" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.514120 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-scripts\") pod \"ceilometer-0\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " pod="openstack/ceilometer-0" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.525858 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55z55\" 
(UniqueName: \"kubernetes.io/projected/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-kube-api-access-55z55\") pod \"ceilometer-0\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " pod="openstack/ceilometer-0" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.547852 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " pod="openstack/ceilometer-0" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.551712 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-config-data\") pod \"ceilometer-0\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " pod="openstack/ceilometer-0" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.556335 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " pod="openstack/ceilometer-0" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.630714 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:55:47 crc kubenswrapper[4947]: I1129 06:55:47.781873 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-tmn66"] Nov 29 06:55:47 crc kubenswrapper[4947]: W1129 06:55:47.896339 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0723ce9_d9b2_4b12_91f1_220f211bac72.slice/crio-77e25b4214c4459962488da4cf47fc07725130c103cae4883091b571782b07a1 WatchSource:0}: Error finding container 77e25b4214c4459962488da4cf47fc07725130c103cae4883091b571782b07a1: Status 404 returned error can't find the container with id 77e25b4214c4459962488da4cf47fc07725130c103cae4883091b571782b07a1 Nov 29 06:55:48 crc kubenswrapper[4947]: I1129 06:55:48.051355 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-l67pc"] Nov 29 06:55:48 crc kubenswrapper[4947]: I1129 06:55:48.194563 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-l67pc" event={"ID":"b341c7eb-214f-49d0-ae91-a27c56857739","Type":"ContainerStarted","Data":"eda23693ae8e4d49522fc87a6b8ae86c3c1fe0f5471bf0148e0c8c0c5e4befb5"} Nov 29 06:55:48 crc kubenswrapper[4947]: I1129 06:55:48.196823 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bb4695fc-tmn66" event={"ID":"d0723ce9-d9b2-4b12-91f1-220f211bac72","Type":"ContainerStarted","Data":"77e25b4214c4459962488da4cf47fc07725130c103cae4883091b571782b07a1"} Nov 29 06:55:48 crc kubenswrapper[4947]: I1129 06:55:48.202696 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mh2wf"] Nov 29 06:55:48 crc kubenswrapper[4947]: I1129 06:55:48.318530 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4sjrf"] Nov 29 06:55:48 crc kubenswrapper[4947]: I1129 06:55:48.327514 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-745b9ddc8c-7fptv"] Nov 29 06:55:48 crc kubenswrapper[4947]: W1129 06:55:48.333595 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bfd51e3_c63d_4382_a864_cb0570e277d8.slice/crio-7e5822a3cc7c73b40b26869a36246c16c397ebf79c6cb546e3d84b4f2844cdaf WatchSource:0}: Error finding container 7e5822a3cc7c73b40b26869a36246c16c397ebf79c6cb546e3d84b4f2844cdaf: Status 404 returned error can't find the container with id 7e5822a3cc7c73b40b26869a36246c16c397ebf79c6cb546e3d84b4f2844cdaf Nov 29 06:55:48 crc kubenswrapper[4947]: I1129 06:55:48.477649 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-77d4t"] Nov 29 06:55:48 crc kubenswrapper[4947]: I1129 06:55:48.487870 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:55:48 crc kubenswrapper[4947]: I1129 06:55:48.656958 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-55q5b"] Nov 29 06:55:49 crc kubenswrapper[4947]: I1129 06:55:49.220486 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bfd51e3-c63d-4382-a864-cb0570e277d8" containerID="914c2766feded3e7f76c8a625f6b44f5dc881a4aa99796559acc9afb28a89a37" exitCode=0 Nov 29 06:55:49 crc kubenswrapper[4947]: I1129 06:55:49.221106 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv" event={"ID":"8bfd51e3-c63d-4382-a864-cb0570e277d8","Type":"ContainerDied","Data":"914c2766feded3e7f76c8a625f6b44f5dc881a4aa99796559acc9afb28a89a37"} Nov 29 06:55:49 crc kubenswrapper[4947]: I1129 06:55:49.221155 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv" event={"ID":"8bfd51e3-c63d-4382-a864-cb0570e277d8","Type":"ContainerStarted","Data":"7e5822a3cc7c73b40b26869a36246c16c397ebf79c6cb546e3d84b4f2844cdaf"} Nov 29 06:55:49 crc kubenswrapper[4947]: I1129 06:55:49.227862 
4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-55q5b" event={"ID":"42494b25-5db7-478c-b2a9-e14c7d990c0c","Type":"ContainerStarted","Data":"296073b3fc8d78a741670bf787d41010af1b820b66f8aa8017a9c73ed2531174"} Nov 29 06:55:49 crc kubenswrapper[4947]: I1129 06:55:49.236616 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4sjrf" event={"ID":"af3ad5b2-503c-4d1c-927c-0feab47e5212","Type":"ContainerStarted","Data":"e3996ec78a4407f0f39b3068a197d553922ee5f1b0a9155bead9dca46afe0267"} Nov 29 06:55:49 crc kubenswrapper[4947]: I1129 06:55:49.236677 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4sjrf" event={"ID":"af3ad5b2-503c-4d1c-927c-0feab47e5212","Type":"ContainerStarted","Data":"222b09d42ba08bc614d3fa0a72c941b42f2f854132fd50c19f420ef8347c9f98"} Nov 29 06:55:49 crc kubenswrapper[4947]: I1129 06:55:49.256553 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"540c7409-0d06-4ab4-89ab-e2a7dd84cb91","Type":"ContainerStarted","Data":"dfa49f481f5ad06875e6f1168ac4ab94504b22beabc02c88f36a809c752d7a69"} Nov 29 06:55:49 crc kubenswrapper[4947]: I1129 06:55:49.260896 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6njvc" event={"ID":"7d3675d1-9c60-4463-936e-95953f64b250","Type":"ContainerStarted","Data":"dea28b99f1041cfd6ccad7f37b448d068eec67196b39a0e12bbb99e845020b24"} Nov 29 06:55:49 crc kubenswrapper[4947]: I1129 06:55:49.272920 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-77d4t" event={"ID":"f26cf011-4f52-4d26-a248-b92906824399","Type":"ContainerStarted","Data":"cb221bfdd7ece81457b8fcba3f53f0f7667d5cb1cacfa7ef4403c24844cf6099"} Nov 29 06:55:49 crc kubenswrapper[4947]: I1129 06:55:49.280161 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mh2wf" 
event={"ID":"9735c296-f485-4f6a-b52b-e3c08ff6593b","Type":"ContainerStarted","Data":"7bd62e2877550c20eabd05cf2f63e2b561345bf6d8e8494a1e9c3b047ba4ea8a"} Nov 29 06:55:49 crc kubenswrapper[4947]: I1129 06:55:49.280247 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mh2wf" event={"ID":"9735c296-f485-4f6a-b52b-e3c08ff6593b","Type":"ContainerStarted","Data":"0a4b52de2d8cf83978b79f53d678e8485d9d656db6ab1bf541cd6a2e5a606d43"} Nov 29 06:55:49 crc kubenswrapper[4947]: I1129 06:55:49.296780 4947 generic.go:334] "Generic (PLEG): container finished" podID="d0723ce9-d9b2-4b12-91f1-220f211bac72" containerID="b746e207432a02d1c9293a23b1b23a0b8b6aa5889663e12cf0548c12f87dda91" exitCode=0 Nov 29 06:55:49 crc kubenswrapper[4947]: I1129 06:55:49.296872 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bb4695fc-tmn66" event={"ID":"d0723ce9-d9b2-4b12-91f1-220f211bac72","Type":"ContainerDied","Data":"b746e207432a02d1c9293a23b1b23a0b8b6aa5889663e12cf0548c12f87dda91"} Nov 29 06:55:49 crc kubenswrapper[4947]: I1129 06:55:49.489754 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-4sjrf" podStartSLOduration=3.489722759 podStartE2EDuration="3.489722759s" podCreationTimestamp="2025-11-29 06:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:55:49.451078958 +0000 UTC m=+1300.495461039" watchObservedRunningTime="2025-11-29 06:55:49.489722759 +0000 UTC m=+1300.534104840" Nov 29 06:55:49 crc kubenswrapper[4947]: I1129 06:55:49.594300 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-6njvc" podStartSLOduration=5.264834999 podStartE2EDuration="39.594264347s" podCreationTimestamp="2025-11-29 06:55:10 +0000 UTC" firstStartedPulling="2025-11-29 06:55:11.934611574 +0000 UTC m=+1262.978993655" lastFinishedPulling="2025-11-29 
06:55:46.264040912 +0000 UTC m=+1297.308423003" observedRunningTime="2025-11-29 06:55:49.589796538 +0000 UTC m=+1300.634178639" watchObservedRunningTime="2025-11-29 06:55:49.594264347 +0000 UTC m=+1300.638646428" Nov 29 06:55:49 crc kubenswrapper[4947]: I1129 06:55:49.596249 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mh2wf" podStartSLOduration=3.596240186 podStartE2EDuration="3.596240186s" podCreationTimestamp="2025-11-29 06:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:55:49.558550497 +0000 UTC m=+1300.602932588" watchObservedRunningTime="2025-11-29 06:55:49.596240186 +0000 UTC m=+1300.640622267" Nov 29 06:55:49 crc kubenswrapper[4947]: I1129 06:55:49.613647 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.027134 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-tmn66" Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.035011 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-config\") pod \"d0723ce9-d9b2-4b12-91f1-220f211bac72\" (UID: \"d0723ce9-d9b2-4b12-91f1-220f211bac72\") " Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.035253 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-ovsdbserver-nb\") pod \"d0723ce9-d9b2-4b12-91f1-220f211bac72\" (UID: \"d0723ce9-d9b2-4b12-91f1-220f211bac72\") " Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.035366 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-dns-svc\") pod \"d0723ce9-d9b2-4b12-91f1-220f211bac72\" (UID: \"d0723ce9-d9b2-4b12-91f1-220f211bac72\") " Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.035485 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-ovsdbserver-sb\") pod \"d0723ce9-d9b2-4b12-91f1-220f211bac72\" (UID: \"d0723ce9-d9b2-4b12-91f1-220f211bac72\") " Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.035613 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drb5r\" (UniqueName: \"kubernetes.io/projected/d0723ce9-d9b2-4b12-91f1-220f211bac72-kube-api-access-drb5r\") pod \"d0723ce9-d9b2-4b12-91f1-220f211bac72\" (UID: \"d0723ce9-d9b2-4b12-91f1-220f211bac72\") " Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.065558 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d0723ce9-d9b2-4b12-91f1-220f211bac72-kube-api-access-drb5r" (OuterVolumeSpecName: "kube-api-access-drb5r") pod "d0723ce9-d9b2-4b12-91f1-220f211bac72" (UID: "d0723ce9-d9b2-4b12-91f1-220f211bac72"). InnerVolumeSpecName "kube-api-access-drb5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.075183 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-config" (OuterVolumeSpecName: "config") pod "d0723ce9-d9b2-4b12-91f1-220f211bac72" (UID: "d0723ce9-d9b2-4b12-91f1-220f211bac72"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.082553 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d0723ce9-d9b2-4b12-91f1-220f211bac72" (UID: "d0723ce9-d9b2-4b12-91f1-220f211bac72"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.088264 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d0723ce9-d9b2-4b12-91f1-220f211bac72" (UID: "d0723ce9-d9b2-4b12-91f1-220f211bac72"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.088741 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d0723ce9-d9b2-4b12-91f1-220f211bac72" (UID: "d0723ce9-d9b2-4b12-91f1-220f211bac72"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.137787 4947 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.138268 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.138283 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drb5r\" (UniqueName: \"kubernetes.io/projected/d0723ce9-d9b2-4b12-91f1-220f211bac72-kube-api-access-drb5r\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.138292 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.138301 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0723ce9-d9b2-4b12-91f1-220f211bac72-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.315091 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv" event={"ID":"8bfd51e3-c63d-4382-a864-cb0570e277d8","Type":"ContainerStarted","Data":"8731b73dcd468f70161bee9662a54eaf91c21f855da0fbd1d56ae2c80052d69f"} Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.315555 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv" Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.319893 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-75bb4695fc-tmn66" event={"ID":"d0723ce9-d9b2-4b12-91f1-220f211bac72","Type":"ContainerDied","Data":"77e25b4214c4459962488da4cf47fc07725130c103cae4883091b571782b07a1"} Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.321747 4947 scope.go:117] "RemoveContainer" containerID="b746e207432a02d1c9293a23b1b23a0b8b6aa5889663e12cf0548c12f87dda91" Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.317186 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-tmn66" Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.342713 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv" podStartSLOduration=4.342686938 podStartE2EDuration="4.342686938s" podCreationTimestamp="2025-11-29 06:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:55:50.33947865 +0000 UTC m=+1301.383860731" watchObservedRunningTime="2025-11-29 06:55:50.342686938 +0000 UTC m=+1301.387069019" Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.473678 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-tmn66"] Nov 29 06:55:50 crc kubenswrapper[4947]: I1129 06:55:50.482522 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-tmn66"] Nov 29 06:55:51 crc kubenswrapper[4947]: I1129 06:55:51.193332 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0723ce9-d9b2-4b12-91f1-220f211bac72" path="/var/lib/kubelet/pods/d0723ce9-d9b2-4b12-91f1-220f211bac72/volumes" Nov 29 06:55:57 crc kubenswrapper[4947]: I1129 06:55:57.294413 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv" Nov 29 06:55:57 crc kubenswrapper[4947]: I1129 06:55:57.355032 4947 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-s8tfs"] Nov 29 06:55:57 crc kubenswrapper[4947]: I1129 06:55:57.355354 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" podUID="415d7008-edae-402c-8138-2c069385d502" containerName="dnsmasq-dns" containerID="cri-o://702c25b9fc0f8fb61bbd2b4c53eadd1a17083f4fa927624a756f734a26cb553d" gracePeriod=10 Nov 29 06:55:57 crc kubenswrapper[4947]: I1129 06:55:57.789889 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" podUID="415d7008-edae-402c-8138-2c069385d502" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Nov 29 06:55:59 crc kubenswrapper[4947]: E1129 06:55:59.076954 4947 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod415d7008_edae_402c_8138_2c069385d502.slice/crio-conmon-702c25b9fc0f8fb61bbd2b4c53eadd1a17083f4fa927624a756f734a26cb553d.scope\": RecentStats: unable to find data in memory cache]" Nov 29 06:55:59 crc kubenswrapper[4947]: I1129 06:55:59.453183 4947 generic.go:334] "Generic (PLEG): container finished" podID="415d7008-edae-402c-8138-2c069385d502" containerID="702c25b9fc0f8fb61bbd2b4c53eadd1a17083f4fa927624a756f734a26cb553d" exitCode=0 Nov 29 06:55:59 crc kubenswrapper[4947]: I1129 06:55:59.453264 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" event={"ID":"415d7008-edae-402c-8138-2c069385d502","Type":"ContainerDied","Data":"702c25b9fc0f8fb61bbd2b4c53eadd1a17083f4fa927624a756f734a26cb553d"} Nov 29 06:56:07 crc kubenswrapper[4947]: E1129 06:56:07.623232 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 29 06:56:07 crc kubenswrapper[4947]: E1129 06:56:07.624309 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7fh94h577h5d9h597hd9h58bhfdhc9h574h7fh67bh565h78h55dhb6h56bh579h89h56fh55bh56fh5f6h66fh6bh86h54ch64h8fh5f8hc4h5cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55z55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(540c7409-0d06-4ab4-89ab-e2a7dd84cb91): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:56:07 crc kubenswrapper[4947]: I1129 06:56:07.790877 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" podUID="415d7008-edae-402c-8138-2c069385d502" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout" Nov 29 06:56:09 crc kubenswrapper[4947]: I1129 06:56:09.555659 4947 generic.go:334] "Generic (PLEG): container finished" podID="9735c296-f485-4f6a-b52b-e3c08ff6593b" containerID="7bd62e2877550c20eabd05cf2f63e2b561345bf6d8e8494a1e9c3b047ba4ea8a" exitCode=0 Nov 29 06:56:09 crc kubenswrapper[4947]: I1129 06:56:09.555714 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mh2wf" event={"ID":"9735c296-f485-4f6a-b52b-e3c08ff6593b","Type":"ContainerDied","Data":"7bd62e2877550c20eabd05cf2f63e2b561345bf6d8e8494a1e9c3b047ba4ea8a"} Nov 29 06:56:11 crc kubenswrapper[4947]: E1129 06:56:11.459623 4947 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Nov 29 06:56:11 crc kubenswrapper[4947]: E1129 06:56:11.460855 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bt8w8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNo
nRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-55q5b_openstack(42494b25-5db7-478c-b2a9-e14c7d990c0c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:56:11 crc kubenswrapper[4947]: E1129 06:56:11.462276 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-55q5b" podUID="42494b25-5db7-478c-b2a9-e14c7d990c0c" Nov 29 06:56:11 crc kubenswrapper[4947]: E1129 06:56:11.577682 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-55q5b" podUID="42494b25-5db7-478c-b2a9-e14c7d990c0c" Nov 29 06:56:12 crc kubenswrapper[4947]: I1129 06:56:12.791477 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" podUID="415d7008-edae-402c-8138-2c069385d502" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout" Nov 29 06:56:12 crc kubenswrapper[4947]: I1129 06:56:12.791868 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" Nov 29 06:56:17 crc kubenswrapper[4947]: I1129 06:56:17.792260 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" 
podUID="415d7008-edae-402c-8138-2c069385d502" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout" Nov 29 06:56:20 crc kubenswrapper[4947]: I1129 06:56:20.997129 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" Nov 29 06:56:21 crc kubenswrapper[4947]: I1129 06:56:21.185685 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cv5n\" (UniqueName: \"kubernetes.io/projected/415d7008-edae-402c-8138-2c069385d502-kube-api-access-2cv5n\") pod \"415d7008-edae-402c-8138-2c069385d502\" (UID: \"415d7008-edae-402c-8138-2c069385d502\") " Nov 29 06:56:21 crc kubenswrapper[4947]: I1129 06:56:21.185769 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-ovsdbserver-nb\") pod \"415d7008-edae-402c-8138-2c069385d502\" (UID: \"415d7008-edae-402c-8138-2c069385d502\") " Nov 29 06:56:21 crc kubenswrapper[4947]: I1129 06:56:21.185816 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-dns-svc\") pod \"415d7008-edae-402c-8138-2c069385d502\" (UID: \"415d7008-edae-402c-8138-2c069385d502\") " Nov 29 06:56:21 crc kubenswrapper[4947]: I1129 06:56:21.185894 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-ovsdbserver-sb\") pod \"415d7008-edae-402c-8138-2c069385d502\" (UID: \"415d7008-edae-402c-8138-2c069385d502\") " Nov 29 06:56:21 crc kubenswrapper[4947]: I1129 06:56:21.186018 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-config\") pod 
\"415d7008-edae-402c-8138-2c069385d502\" (UID: \"415d7008-edae-402c-8138-2c069385d502\") " Nov 29 06:56:21 crc kubenswrapper[4947]: I1129 06:56:21.192936 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/415d7008-edae-402c-8138-2c069385d502-kube-api-access-2cv5n" (OuterVolumeSpecName: "kube-api-access-2cv5n") pod "415d7008-edae-402c-8138-2c069385d502" (UID: "415d7008-edae-402c-8138-2c069385d502"). InnerVolumeSpecName "kube-api-access-2cv5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:56:21 crc kubenswrapper[4947]: I1129 06:56:21.244367 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "415d7008-edae-402c-8138-2c069385d502" (UID: "415d7008-edae-402c-8138-2c069385d502"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:56:21 crc kubenswrapper[4947]: I1129 06:56:21.246412 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-config" (OuterVolumeSpecName: "config") pod "415d7008-edae-402c-8138-2c069385d502" (UID: "415d7008-edae-402c-8138-2c069385d502"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:56:21 crc kubenswrapper[4947]: I1129 06:56:21.248392 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "415d7008-edae-402c-8138-2c069385d502" (UID: "415d7008-edae-402c-8138-2c069385d502"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:56:21 crc kubenswrapper[4947]: I1129 06:56:21.255186 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "415d7008-edae-402c-8138-2c069385d502" (UID: "415d7008-edae-402c-8138-2c069385d502"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:56:21 crc kubenswrapper[4947]: I1129 06:56:21.288298 4947 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:21 crc kubenswrapper[4947]: I1129 06:56:21.288365 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:21 crc kubenswrapper[4947]: I1129 06:56:21.288377 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:21 crc kubenswrapper[4947]: I1129 06:56:21.288389 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cv5n\" (UniqueName: \"kubernetes.io/projected/415d7008-edae-402c-8138-2c069385d502-kube-api-access-2cv5n\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:21 crc kubenswrapper[4947]: I1129 06:56:21.288410 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/415d7008-edae-402c-8138-2c069385d502-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:21 crc kubenswrapper[4947]: I1129 06:56:21.682244 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" 
event={"ID":"415d7008-edae-402c-8138-2c069385d502","Type":"ContainerDied","Data":"f475eb0bb03b7b1dd1276756993d4c6346c426f173ee18e0ab21a9339c340323"} Nov 29 06:56:21 crc kubenswrapper[4947]: I1129 06:56:21.682319 4947 scope.go:117] "RemoveContainer" containerID="702c25b9fc0f8fb61bbd2b4c53eadd1a17083f4fa927624a756f734a26cb553d" Nov 29 06:56:21 crc kubenswrapper[4947]: I1129 06:56:21.682479 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" Nov 29 06:56:21 crc kubenswrapper[4947]: I1129 06:56:21.726843 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-s8tfs"] Nov 29 06:56:21 crc kubenswrapper[4947]: I1129 06:56:21.734508 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-s8tfs"] Nov 29 06:56:22 crc kubenswrapper[4947]: I1129 06:56:22.794033 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-s8tfs" podUID="415d7008-edae-402c-8138-2c069385d502" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout" Nov 29 06:56:23 crc kubenswrapper[4947]: E1129 06:56:23.182646 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 29 06:56:23 crc kubenswrapper[4947]: E1129 06:56:23.182934 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hncz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-77d4t_openstack(f26cf011-4f52-4d26-a248-b92906824399): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:56:23 crc kubenswrapper[4947]: E1129 06:56:23.184131 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-77d4t" 
podUID="f26cf011-4f52-4d26-a248-b92906824399" Nov 29 06:56:23 crc kubenswrapper[4947]: I1129 06:56:23.191676 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="415d7008-edae-402c-8138-2c069385d502" path="/var/lib/kubelet/pods/415d7008-edae-402c-8138-2c069385d502/volumes" Nov 29 06:56:23 crc kubenswrapper[4947]: E1129 06:56:23.704530 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-77d4t" podUID="f26cf011-4f52-4d26-a248-b92906824399" Nov 29 06:56:24 crc kubenswrapper[4947]: I1129 06:56:24.285653 4947 scope.go:117] "RemoveContainer" containerID="9567132af4e2a6dc3d88eab753d4bab2c217aba48ead8063efa76404ef6d58ad" Nov 29 06:56:24 crc kubenswrapper[4947]: E1129 06:56:24.339318 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 29 06:56:24 crc kubenswrapper[4947]: E1129 06:56:24.339625 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rpj5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-l67pc_openstack(b341c7eb-214f-49d0-ae91-a27c56857739): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 06:56:24 crc kubenswrapper[4947]: E1129 06:56:24.340798 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-l67pc" podUID="b341c7eb-214f-49d0-ae91-a27c56857739" Nov 29 06:56:24 crc kubenswrapper[4947]: I1129 06:56:24.400865 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mh2wf" Nov 29 06:56:24 crc kubenswrapper[4947]: I1129 06:56:24.554322 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st96r\" (UniqueName: \"kubernetes.io/projected/9735c296-f485-4f6a-b52b-e3c08ff6593b-kube-api-access-st96r\") pod \"9735c296-f485-4f6a-b52b-e3c08ff6593b\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " Nov 29 06:56:24 crc kubenswrapper[4947]: I1129 06:56:24.554427 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-config-data\") pod \"9735c296-f485-4f6a-b52b-e3c08ff6593b\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " Nov 29 06:56:24 crc kubenswrapper[4947]: I1129 06:56:24.554452 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-scripts\") pod \"9735c296-f485-4f6a-b52b-e3c08ff6593b\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " Nov 29 06:56:24 crc 
kubenswrapper[4947]: I1129 06:56:24.554494 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-combined-ca-bundle\") pod \"9735c296-f485-4f6a-b52b-e3c08ff6593b\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " Nov 29 06:56:24 crc kubenswrapper[4947]: I1129 06:56:24.554597 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-fernet-keys\") pod \"9735c296-f485-4f6a-b52b-e3c08ff6593b\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " Nov 29 06:56:24 crc kubenswrapper[4947]: I1129 06:56:24.554701 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-credential-keys\") pod \"9735c296-f485-4f6a-b52b-e3c08ff6593b\" (UID: \"9735c296-f485-4f6a-b52b-e3c08ff6593b\") " Nov 29 06:56:24 crc kubenswrapper[4947]: I1129 06:56:24.566466 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9735c296-f485-4f6a-b52b-e3c08ff6593b" (UID: "9735c296-f485-4f6a-b52b-e3c08ff6593b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:24 crc kubenswrapper[4947]: I1129 06:56:24.568459 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9735c296-f485-4f6a-b52b-e3c08ff6593b-kube-api-access-st96r" (OuterVolumeSpecName: "kube-api-access-st96r") pod "9735c296-f485-4f6a-b52b-e3c08ff6593b" (UID: "9735c296-f485-4f6a-b52b-e3c08ff6593b"). InnerVolumeSpecName "kube-api-access-st96r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:56:24 crc kubenswrapper[4947]: I1129 06:56:24.568568 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9735c296-f485-4f6a-b52b-e3c08ff6593b" (UID: "9735c296-f485-4f6a-b52b-e3c08ff6593b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:24 crc kubenswrapper[4947]: I1129 06:56:24.568759 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-scripts" (OuterVolumeSpecName: "scripts") pod "9735c296-f485-4f6a-b52b-e3c08ff6593b" (UID: "9735c296-f485-4f6a-b52b-e3c08ff6593b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:24 crc kubenswrapper[4947]: I1129 06:56:24.589264 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9735c296-f485-4f6a-b52b-e3c08ff6593b" (UID: "9735c296-f485-4f6a-b52b-e3c08ff6593b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:24 crc kubenswrapper[4947]: I1129 06:56:24.594550 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-config-data" (OuterVolumeSpecName: "config-data") pod "9735c296-f485-4f6a-b52b-e3c08ff6593b" (UID: "9735c296-f485-4f6a-b52b-e3c08ff6593b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:24 crc kubenswrapper[4947]: I1129 06:56:24.657257 4947 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:24 crc kubenswrapper[4947]: I1129 06:56:24.657315 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st96r\" (UniqueName: \"kubernetes.io/projected/9735c296-f485-4f6a-b52b-e3c08ff6593b-kube-api-access-st96r\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:24 crc kubenswrapper[4947]: I1129 06:56:24.657334 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:24 crc kubenswrapper[4947]: I1129 06:56:24.657347 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:24 crc kubenswrapper[4947]: I1129 06:56:24.657364 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:24 crc kubenswrapper[4947]: I1129 06:56:24.657378 4947 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9735c296-f485-4f6a-b52b-e3c08ff6593b-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:24 crc kubenswrapper[4947]: I1129 06:56:24.721568 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mh2wf" Nov 29 06:56:24 crc kubenswrapper[4947]: I1129 06:56:24.721576 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mh2wf" event={"ID":"9735c296-f485-4f6a-b52b-e3c08ff6593b","Type":"ContainerDied","Data":"0a4b52de2d8cf83978b79f53d678e8485d9d656db6ab1bf541cd6a2e5a606d43"} Nov 29 06:56:24 crc kubenswrapper[4947]: I1129 06:56:24.721659 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a4b52de2d8cf83978b79f53d678e8485d9d656db6ab1bf541cd6a2e5a606d43" Nov 29 06:56:24 crc kubenswrapper[4947]: E1129 06:56:24.728403 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-l67pc" podUID="b341c7eb-214f-49d0-ae91-a27c56857739" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.499768 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mh2wf"] Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.508952 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mh2wf"] Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.615005 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-qqj6m"] Nov 29 06:56:25 crc kubenswrapper[4947]: E1129 06:56:25.615583 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="415d7008-edae-402c-8138-2c069385d502" containerName="dnsmasq-dns" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.615615 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="415d7008-edae-402c-8138-2c069385d502" containerName="dnsmasq-dns" Nov 29 06:56:25 crc kubenswrapper[4947]: E1129 06:56:25.615632 4947 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="415d7008-edae-402c-8138-2c069385d502" containerName="init" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.615643 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="415d7008-edae-402c-8138-2c069385d502" containerName="init" Nov 29 06:56:25 crc kubenswrapper[4947]: E1129 06:56:25.615661 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0723ce9-d9b2-4b12-91f1-220f211bac72" containerName="init" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.615670 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0723ce9-d9b2-4b12-91f1-220f211bac72" containerName="init" Nov 29 06:56:25 crc kubenswrapper[4947]: E1129 06:56:25.615687 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9735c296-f485-4f6a-b52b-e3c08ff6593b" containerName="keystone-bootstrap" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.615699 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="9735c296-f485-4f6a-b52b-e3c08ff6593b" containerName="keystone-bootstrap" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.615913 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0723ce9-d9b2-4b12-91f1-220f211bac72" containerName="init" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.615931 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="415d7008-edae-402c-8138-2c069385d502" containerName="dnsmasq-dns" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.615951 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="9735c296-f485-4f6a-b52b-e3c08ff6593b" containerName="keystone-bootstrap" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.616923 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qqj6m" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.627101 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.627824 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.627855 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.627270 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qqj6m"] Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.627489 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zkrv9" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.628037 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.715475 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-credential-keys\") pod \"keystone-bootstrap-qqj6m\" (UID: \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " pod="openstack/keystone-bootstrap-qqj6m" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.715575 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-config-data\") pod \"keystone-bootstrap-qqj6m\" (UID: \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " pod="openstack/keystone-bootstrap-qqj6m" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.715609 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-g4qlg\" (UniqueName: \"kubernetes.io/projected/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-kube-api-access-g4qlg\") pod \"keystone-bootstrap-qqj6m\" (UID: \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " pod="openstack/keystone-bootstrap-qqj6m" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.715683 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-combined-ca-bundle\") pod \"keystone-bootstrap-qqj6m\" (UID: \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " pod="openstack/keystone-bootstrap-qqj6m" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.715831 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-fernet-keys\") pod \"keystone-bootstrap-qqj6m\" (UID: \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " pod="openstack/keystone-bootstrap-qqj6m" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.715901 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-scripts\") pod \"keystone-bootstrap-qqj6m\" (UID: \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " pod="openstack/keystone-bootstrap-qqj6m" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.817646 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-combined-ca-bundle\") pod \"keystone-bootstrap-qqj6m\" (UID: \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " pod="openstack/keystone-bootstrap-qqj6m" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.817736 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-fernet-keys\") pod \"keystone-bootstrap-qqj6m\" (UID: \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " pod="openstack/keystone-bootstrap-qqj6m" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.817770 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-scripts\") pod \"keystone-bootstrap-qqj6m\" (UID: \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " pod="openstack/keystone-bootstrap-qqj6m" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.817877 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-credential-keys\") pod \"keystone-bootstrap-qqj6m\" (UID: \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " pod="openstack/keystone-bootstrap-qqj6m" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.817915 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-config-data\") pod \"keystone-bootstrap-qqj6m\" (UID: \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " pod="openstack/keystone-bootstrap-qqj6m" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.817947 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4qlg\" (UniqueName: \"kubernetes.io/projected/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-kube-api-access-g4qlg\") pod \"keystone-bootstrap-qqj6m\" (UID: \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " pod="openstack/keystone-bootstrap-qqj6m" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.825611 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-scripts\") pod \"keystone-bootstrap-qqj6m\" (UID: 
\"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " pod="openstack/keystone-bootstrap-qqj6m" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.825665 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-fernet-keys\") pod \"keystone-bootstrap-qqj6m\" (UID: \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " pod="openstack/keystone-bootstrap-qqj6m" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.825731 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-combined-ca-bundle\") pod \"keystone-bootstrap-qqj6m\" (UID: \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " pod="openstack/keystone-bootstrap-qqj6m" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.827065 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-credential-keys\") pod \"keystone-bootstrap-qqj6m\" (UID: \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " pod="openstack/keystone-bootstrap-qqj6m" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.828621 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-config-data\") pod \"keystone-bootstrap-qqj6m\" (UID: \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " pod="openstack/keystone-bootstrap-qqj6m" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 06:56:25.837315 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4qlg\" (UniqueName: \"kubernetes.io/projected/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-kube-api-access-g4qlg\") pod \"keystone-bootstrap-qqj6m\" (UID: \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " pod="openstack/keystone-bootstrap-qqj6m" Nov 29 06:56:25 crc kubenswrapper[4947]: I1129 
06:56:25.949907 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qqj6m" Nov 29 06:56:27 crc kubenswrapper[4947]: I1129 06:56:27.198962 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9735c296-f485-4f6a-b52b-e3c08ff6593b" path="/var/lib/kubelet/pods/9735c296-f485-4f6a-b52b-e3c08ff6593b/volumes" Nov 29 06:56:27 crc kubenswrapper[4947]: I1129 06:56:27.494549 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qqj6m"] Nov 29 06:56:27 crc kubenswrapper[4947]: I1129 06:56:27.752084 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-55q5b" event={"ID":"42494b25-5db7-478c-b2a9-e14c7d990c0c","Type":"ContainerStarted","Data":"9f34816641510593c4f314d556b63978ad12e56bf50a7d9fd70958607ffbf0d4"} Nov 29 06:56:27 crc kubenswrapper[4947]: I1129 06:56:27.755476 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qqj6m" event={"ID":"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b","Type":"ContainerStarted","Data":"1eb2fad9b9dc5d6de43102ccb18fc1faf81b53850822467b8df8390b76fb827b"} Nov 29 06:56:27 crc kubenswrapper[4947]: I1129 06:56:27.755533 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qqj6m" event={"ID":"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b","Type":"ContainerStarted","Data":"294e0ae68c6fb5ffca323c9b63a8cc653b609ed61a5a07b6507f83622994edd4"} Nov 29 06:56:27 crc kubenswrapper[4947]: I1129 06:56:27.760105 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"540c7409-0d06-4ab4-89ab-e2a7dd84cb91","Type":"ContainerStarted","Data":"afdf16e6f49dc2357f8c1ef1e8d3cac3af6b2b50ae9675fc1ffab106d7c23f94"} Nov 29 06:56:27 crc kubenswrapper[4947]: I1129 06:56:27.783039 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-55q5b" podStartSLOduration=3.455943536 
podStartE2EDuration="41.783017216s" podCreationTimestamp="2025-11-29 06:55:46 +0000 UTC" firstStartedPulling="2025-11-29 06:55:48.662792045 +0000 UTC m=+1299.707174126" lastFinishedPulling="2025-11-29 06:56:26.989865725 +0000 UTC m=+1338.034247806" observedRunningTime="2025-11-29 06:56:27.774095239 +0000 UTC m=+1338.818477340" watchObservedRunningTime="2025-11-29 06:56:27.783017216 +0000 UTC m=+1338.827399297" Nov 29 06:56:27 crc kubenswrapper[4947]: I1129 06:56:27.800452 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-qqj6m" podStartSLOduration=2.800433951 podStartE2EDuration="2.800433951s" podCreationTimestamp="2025-11-29 06:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:56:27.794959287 +0000 UTC m=+1338.839341368" watchObservedRunningTime="2025-11-29 06:56:27.800433951 +0000 UTC m=+1338.844816032" Nov 29 06:56:34 crc kubenswrapper[4947]: I1129 06:56:34.844749 4947 generic.go:334] "Generic (PLEG): container finished" podID="8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b" containerID="1eb2fad9b9dc5d6de43102ccb18fc1faf81b53850822467b8df8390b76fb827b" exitCode=0 Nov 29 06:56:34 crc kubenswrapper[4947]: I1129 06:56:34.845734 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qqj6m" event={"ID":"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b","Type":"ContainerDied","Data":"1eb2fad9b9dc5d6de43102ccb18fc1faf81b53850822467b8df8390b76fb827b"} Nov 29 06:56:35 crc kubenswrapper[4947]: I1129 06:56:35.859573 4947 generic.go:334] "Generic (PLEG): container finished" podID="42494b25-5db7-478c-b2a9-e14c7d990c0c" containerID="9f34816641510593c4f314d556b63978ad12e56bf50a7d9fd70958607ffbf0d4" exitCode=0 Nov 29 06:56:35 crc kubenswrapper[4947]: I1129 06:56:35.859686 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-55q5b" 
event={"ID":"42494b25-5db7-478c-b2a9-e14c7d990c0c","Type":"ContainerDied","Data":"9f34816641510593c4f314d556b63978ad12e56bf50a7d9fd70958607ffbf0d4"} Nov 29 06:56:35 crc kubenswrapper[4947]: I1129 06:56:35.862947 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"540c7409-0d06-4ab4-89ab-e2a7dd84cb91","Type":"ContainerStarted","Data":"447c4d08e54bff210fbf6d19d24387f947ee760730f96174fb5b42fa399c7c13"} Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.230065 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qqj6m" Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.363691 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-combined-ca-bundle\") pod \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\" (UID: \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.363871 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4qlg\" (UniqueName: \"kubernetes.io/projected/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-kube-api-access-g4qlg\") pod \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\" (UID: \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.364076 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-config-data\") pod \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\" (UID: \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.364153 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-scripts\") pod \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\" (UID: 
\"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.364182 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-credential-keys\") pod \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\" (UID: \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.364254 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-fernet-keys\") pod \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\" (UID: \"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b\") " Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.374839 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-kube-api-access-g4qlg" (OuterVolumeSpecName: "kube-api-access-g4qlg") pod "8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b" (UID: "8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b"). InnerVolumeSpecName "kube-api-access-g4qlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.375053 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-scripts" (OuterVolumeSpecName: "scripts") pod "8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b" (UID: "8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.375644 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b" (UID: "8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.399622 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-config-data" (OuterVolumeSpecName: "config-data") pod "8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b" (UID: "8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.401134 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b" (UID: "8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.404909 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b" (UID: "8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.466306 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.466351 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.466362 4947 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.466378 4947 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.466390 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.466402 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4qlg\" (UniqueName: \"kubernetes.io/projected/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b-kube-api-access-g4qlg\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.880961 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qqj6m" Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.881297 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qqj6m" event={"ID":"8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b","Type":"ContainerDied","Data":"294e0ae68c6fb5ffca323c9b63a8cc653b609ed61a5a07b6507f83622994edd4"} Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.881486 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="294e0ae68c6fb5ffca323c9b63a8cc653b609ed61a5a07b6507f83622994edd4" Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.886417 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-l67pc" event={"ID":"b341c7eb-214f-49d0-ae91-a27c56857739","Type":"ContainerStarted","Data":"d094cbacc0562b9e4a52068ebcc2eb01cb11df7772d6028bab72f1ff549b5121"} Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.959356 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-l67pc" podStartSLOduration=3.157231755 podStartE2EDuration="50.959328434s" podCreationTimestamp="2025-11-29 06:55:46 +0000 UTC" firstStartedPulling="2025-11-29 06:55:48.07915604 +0000 UTC m=+1299.123538121" lastFinishedPulling="2025-11-29 06:56:35.881252709 +0000 UTC m=+1346.925634800" observedRunningTime="2025-11-29 06:56:36.919628067 +0000 UTC m=+1347.964010158" watchObservedRunningTime="2025-11-29 06:56:36.959328434 +0000 UTC m=+1348.003710505" Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.998901 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-569f86dc65-rkk52"] Nov 29 06:56:36 crc kubenswrapper[4947]: E1129 06:56:36.999547 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b" containerName="keystone-bootstrap" Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.999574 4947 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b" containerName="keystone-bootstrap" Nov 29 06:56:36 crc kubenswrapper[4947]: I1129 06:56:36.999837 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b" containerName="keystone-bootstrap" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.000909 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.004896 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zkrv9" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.005036 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.005196 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.005649 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.006066 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.006298 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.015236 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-569f86dc65-rkk52"] Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.183625 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nfgs\" (UniqueName: \"kubernetes.io/projected/e1fdddd0-fe06-43ae-b805-15dbd74c0107-kube-api-access-7nfgs\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " 
pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.183732 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1fdddd0-fe06-43ae-b805-15dbd74c0107-internal-tls-certs\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.183763 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1fdddd0-fe06-43ae-b805-15dbd74c0107-public-tls-certs\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.183797 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1fdddd0-fe06-43ae-b805-15dbd74c0107-scripts\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.183820 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1fdddd0-fe06-43ae-b805-15dbd74c0107-config-data\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.183852 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e1fdddd0-fe06-43ae-b805-15dbd74c0107-fernet-keys\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " 
pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.183939 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e1fdddd0-fe06-43ae-b805-15dbd74c0107-credential-keys\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.183968 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1fdddd0-fe06-43ae-b805-15dbd74c0107-combined-ca-bundle\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.286328 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1fdddd0-fe06-43ae-b805-15dbd74c0107-scripts\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.286408 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1fdddd0-fe06-43ae-b805-15dbd74c0107-config-data\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.286463 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e1fdddd0-fe06-43ae-b805-15dbd74c0107-fernet-keys\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc 
kubenswrapper[4947]: I1129 06:56:37.286622 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e1fdddd0-fe06-43ae-b805-15dbd74c0107-credential-keys\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.286652 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1fdddd0-fe06-43ae-b805-15dbd74c0107-combined-ca-bundle\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.286704 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nfgs\" (UniqueName: \"kubernetes.io/projected/e1fdddd0-fe06-43ae-b805-15dbd74c0107-kube-api-access-7nfgs\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.286753 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1fdddd0-fe06-43ae-b805-15dbd74c0107-internal-tls-certs\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.286788 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1fdddd0-fe06-43ae-b805-15dbd74c0107-public-tls-certs\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.304488 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1fdddd0-fe06-43ae-b805-15dbd74c0107-internal-tls-certs\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.306058 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1fdddd0-fe06-43ae-b805-15dbd74c0107-config-data\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.309979 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1fdddd0-fe06-43ae-b805-15dbd74c0107-combined-ca-bundle\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.310210 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e1fdddd0-fe06-43ae-b805-15dbd74c0107-fernet-keys\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.310281 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1fdddd0-fe06-43ae-b805-15dbd74c0107-public-tls-certs\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.315566 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e1fdddd0-fe06-43ae-b805-15dbd74c0107-scripts\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.319884 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e1fdddd0-fe06-43ae-b805-15dbd74c0107-credential-keys\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.325899 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nfgs\" (UniqueName: \"kubernetes.io/projected/e1fdddd0-fe06-43ae-b805-15dbd74c0107-kube-api-access-7nfgs\") pod \"keystone-569f86dc65-rkk52\" (UID: \"e1fdddd0-fe06-43ae-b805-15dbd74c0107\") " pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.346409 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.476239 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-55q5b" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.592327 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42494b25-5db7-478c-b2a9-e14c7d990c0c-combined-ca-bundle\") pod \"42494b25-5db7-478c-b2a9-e14c7d990c0c\" (UID: \"42494b25-5db7-478c-b2a9-e14c7d990c0c\") " Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.592485 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42494b25-5db7-478c-b2a9-e14c7d990c0c-config-data\") pod \"42494b25-5db7-478c-b2a9-e14c7d990c0c\" (UID: \"42494b25-5db7-478c-b2a9-e14c7d990c0c\") " Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.592560 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42494b25-5db7-478c-b2a9-e14c7d990c0c-scripts\") pod \"42494b25-5db7-478c-b2a9-e14c7d990c0c\" (UID: \"42494b25-5db7-478c-b2a9-e14c7d990c0c\") " Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.592660 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt8w8\" (UniqueName: \"kubernetes.io/projected/42494b25-5db7-478c-b2a9-e14c7d990c0c-kube-api-access-bt8w8\") pod \"42494b25-5db7-478c-b2a9-e14c7d990c0c\" (UID: \"42494b25-5db7-478c-b2a9-e14c7d990c0c\") " Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.592695 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42494b25-5db7-478c-b2a9-e14c7d990c0c-logs\") pod \"42494b25-5db7-478c-b2a9-e14c7d990c0c\" (UID: \"42494b25-5db7-478c-b2a9-e14c7d990c0c\") " Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.595209 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/42494b25-5db7-478c-b2a9-e14c7d990c0c-logs" (OuterVolumeSpecName: "logs") pod "42494b25-5db7-478c-b2a9-e14c7d990c0c" (UID: "42494b25-5db7-478c-b2a9-e14c7d990c0c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.600351 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42494b25-5db7-478c-b2a9-e14c7d990c0c-scripts" (OuterVolumeSpecName: "scripts") pod "42494b25-5db7-478c-b2a9-e14c7d990c0c" (UID: "42494b25-5db7-478c-b2a9-e14c7d990c0c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.600384 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42494b25-5db7-478c-b2a9-e14c7d990c0c-kube-api-access-bt8w8" (OuterVolumeSpecName: "kube-api-access-bt8w8") pod "42494b25-5db7-478c-b2a9-e14c7d990c0c" (UID: "42494b25-5db7-478c-b2a9-e14c7d990c0c"). InnerVolumeSpecName "kube-api-access-bt8w8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.682014 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42494b25-5db7-478c-b2a9-e14c7d990c0c-config-data" (OuterVolumeSpecName: "config-data") pod "42494b25-5db7-478c-b2a9-e14c7d990c0c" (UID: "42494b25-5db7-478c-b2a9-e14c7d990c0c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.686581 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42494b25-5db7-478c-b2a9-e14c7d990c0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42494b25-5db7-478c-b2a9-e14c7d990c0c" (UID: "42494b25-5db7-478c-b2a9-e14c7d990c0c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.695061 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42494b25-5db7-478c-b2a9-e14c7d990c0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.695102 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42494b25-5db7-478c-b2a9-e14c7d990c0c-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.695114 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42494b25-5db7-478c-b2a9-e14c7d990c0c-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.695123 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt8w8\" (UniqueName: \"kubernetes.io/projected/42494b25-5db7-478c-b2a9-e14c7d990c0c-kube-api-access-bt8w8\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.695141 4947 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42494b25-5db7-478c-b2a9-e14c7d990c0c-logs\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.742055 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-569f86dc65-rkk52"] Nov 29 06:56:37 crc kubenswrapper[4947]: W1129 06:56:37.760405 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1fdddd0_fe06_43ae_b805_15dbd74c0107.slice/crio-4d3aa7c34785453f01a4e21db48e729bdfa035ac67fc832f5373bb5f766e3714 WatchSource:0}: Error finding container 4d3aa7c34785453f01a4e21db48e729bdfa035ac67fc832f5373bb5f766e3714: Status 404 returned error can't find the container with 
id 4d3aa7c34785453f01a4e21db48e729bdfa035ac67fc832f5373bb5f766e3714 Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.905891 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-55q5b" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.906909 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-55q5b" event={"ID":"42494b25-5db7-478c-b2a9-e14c7d990c0c","Type":"ContainerDied","Data":"296073b3fc8d78a741670bf787d41010af1b820b66f8aa8017a9c73ed2531174"} Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.906984 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="296073b3fc8d78a741670bf787d41010af1b820b66f8aa8017a9c73ed2531174" Nov 29 06:56:37 crc kubenswrapper[4947]: I1129 06:56:37.930804 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-569f86dc65-rkk52" event={"ID":"e1fdddd0-fe06-43ae-b805-15dbd74c0107","Type":"ContainerStarted","Data":"4d3aa7c34785453f01a4e21db48e729bdfa035ac67fc832f5373bb5f766e3714"} Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.010641 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-586fc4c58d-gvrpk"] Nov 29 06:56:38 crc kubenswrapper[4947]: E1129 06:56:38.011306 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42494b25-5db7-478c-b2a9-e14c7d990c0c" containerName="placement-db-sync" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.011334 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="42494b25-5db7-478c-b2a9-e14c7d990c0c" containerName="placement-db-sync" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.011577 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="42494b25-5db7-478c-b2a9-e14c7d990c0c" containerName="placement-db-sync" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.012946 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.019010 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.019176 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.019326 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.019561 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bf7dn" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.019729 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.034922 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-586fc4c58d-gvrpk"] Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.106396 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k654j\" (UniqueName: \"kubernetes.io/projected/08d2e91f-9187-4632-b83f-b966435ebe7f-kube-api-access-k654j\") pod \"placement-586fc4c58d-gvrpk\" (UID: \"08d2e91f-9187-4632-b83f-b966435ebe7f\") " pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.106443 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d2e91f-9187-4632-b83f-b966435ebe7f-combined-ca-bundle\") pod \"placement-586fc4c58d-gvrpk\" (UID: \"08d2e91f-9187-4632-b83f-b966435ebe7f\") " pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.106511 4947 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08d2e91f-9187-4632-b83f-b966435ebe7f-logs\") pod \"placement-586fc4c58d-gvrpk\" (UID: \"08d2e91f-9187-4632-b83f-b966435ebe7f\") " pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.106589 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d2e91f-9187-4632-b83f-b966435ebe7f-config-data\") pod \"placement-586fc4c58d-gvrpk\" (UID: \"08d2e91f-9187-4632-b83f-b966435ebe7f\") " pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.106623 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08d2e91f-9187-4632-b83f-b966435ebe7f-scripts\") pod \"placement-586fc4c58d-gvrpk\" (UID: \"08d2e91f-9187-4632-b83f-b966435ebe7f\") " pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.106645 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d2e91f-9187-4632-b83f-b966435ebe7f-public-tls-certs\") pod \"placement-586fc4c58d-gvrpk\" (UID: \"08d2e91f-9187-4632-b83f-b966435ebe7f\") " pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.106687 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d2e91f-9187-4632-b83f-b966435ebe7f-internal-tls-certs\") pod \"placement-586fc4c58d-gvrpk\" (UID: \"08d2e91f-9187-4632-b83f-b966435ebe7f\") " pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.209070 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k654j\" (UniqueName: \"kubernetes.io/projected/08d2e91f-9187-4632-b83f-b966435ebe7f-kube-api-access-k654j\") pod \"placement-586fc4c58d-gvrpk\" (UID: \"08d2e91f-9187-4632-b83f-b966435ebe7f\") " pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.209147 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d2e91f-9187-4632-b83f-b966435ebe7f-combined-ca-bundle\") pod \"placement-586fc4c58d-gvrpk\" (UID: \"08d2e91f-9187-4632-b83f-b966435ebe7f\") " pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.209258 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08d2e91f-9187-4632-b83f-b966435ebe7f-logs\") pod \"placement-586fc4c58d-gvrpk\" (UID: \"08d2e91f-9187-4632-b83f-b966435ebe7f\") " pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.209301 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d2e91f-9187-4632-b83f-b966435ebe7f-config-data\") pod \"placement-586fc4c58d-gvrpk\" (UID: \"08d2e91f-9187-4632-b83f-b966435ebe7f\") " pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.209341 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08d2e91f-9187-4632-b83f-b966435ebe7f-scripts\") pod \"placement-586fc4c58d-gvrpk\" (UID: \"08d2e91f-9187-4632-b83f-b966435ebe7f\") " pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.209363 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/08d2e91f-9187-4632-b83f-b966435ebe7f-public-tls-certs\") pod \"placement-586fc4c58d-gvrpk\" (UID: \"08d2e91f-9187-4632-b83f-b966435ebe7f\") " pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.209403 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d2e91f-9187-4632-b83f-b966435ebe7f-internal-tls-certs\") pod \"placement-586fc4c58d-gvrpk\" (UID: \"08d2e91f-9187-4632-b83f-b966435ebe7f\") " pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.210459 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08d2e91f-9187-4632-b83f-b966435ebe7f-logs\") pod \"placement-586fc4c58d-gvrpk\" (UID: \"08d2e91f-9187-4632-b83f-b966435ebe7f\") " pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.216103 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d2e91f-9187-4632-b83f-b966435ebe7f-internal-tls-certs\") pod \"placement-586fc4c58d-gvrpk\" (UID: \"08d2e91f-9187-4632-b83f-b966435ebe7f\") " pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.216992 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d2e91f-9187-4632-b83f-b966435ebe7f-combined-ca-bundle\") pod \"placement-586fc4c58d-gvrpk\" (UID: \"08d2e91f-9187-4632-b83f-b966435ebe7f\") " pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.219326 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d2e91f-9187-4632-b83f-b966435ebe7f-config-data\") pod 
\"placement-586fc4c58d-gvrpk\" (UID: \"08d2e91f-9187-4632-b83f-b966435ebe7f\") " pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.220942 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d2e91f-9187-4632-b83f-b966435ebe7f-public-tls-certs\") pod \"placement-586fc4c58d-gvrpk\" (UID: \"08d2e91f-9187-4632-b83f-b966435ebe7f\") " pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.232417 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08d2e91f-9187-4632-b83f-b966435ebe7f-scripts\") pod \"placement-586fc4c58d-gvrpk\" (UID: \"08d2e91f-9187-4632-b83f-b966435ebe7f\") " pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.236324 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k654j\" (UniqueName: \"kubernetes.io/projected/08d2e91f-9187-4632-b83f-b966435ebe7f-kube-api-access-k654j\") pod \"placement-586fc4c58d-gvrpk\" (UID: \"08d2e91f-9187-4632-b83f-b966435ebe7f\") " pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.340838 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.930545 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-586fc4c58d-gvrpk"] Nov 29 06:56:38 crc kubenswrapper[4947]: W1129 06:56:38.953549 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08d2e91f_9187_4632_b83f_b966435ebe7f.slice/crio-671a889ec5cb8eb34960f8dfad6697f91488127774f69b672b4f2db4f7a6bd4e WatchSource:0}: Error finding container 671a889ec5cb8eb34960f8dfad6697f91488127774f69b672b4f2db4f7a6bd4e: Status 404 returned error can't find the container with id 671a889ec5cb8eb34960f8dfad6697f91488127774f69b672b4f2db4f7a6bd4e Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.956070 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-569f86dc65-rkk52" event={"ID":"e1fdddd0-fe06-43ae-b805-15dbd74c0107","Type":"ContainerStarted","Data":"61cc812d6eaf78b69435e08fba77301c2014c25454b0635cde3f6ad68d82b501"} Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.958409 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:56:38 crc kubenswrapper[4947]: I1129 06:56:38.995848 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-569f86dc65-rkk52" podStartSLOduration=2.995827088 podStartE2EDuration="2.995827088s" podCreationTimestamp="2025-11-29 06:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:56:38.991451692 +0000 UTC m=+1350.035833773" watchObservedRunningTime="2025-11-29 06:56:38.995827088 +0000 UTC m=+1350.040209169" Nov 29 06:56:39 crc kubenswrapper[4947]: I1129 06:56:39.967179 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-586fc4c58d-gvrpk" 
event={"ID":"08d2e91f-9187-4632-b83f-b966435ebe7f","Type":"ContainerStarted","Data":"671a889ec5cb8eb34960f8dfad6697f91488127774f69b672b4f2db4f7a6bd4e"} Nov 29 06:56:41 crc kubenswrapper[4947]: I1129 06:56:41.992887 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-586fc4c58d-gvrpk" event={"ID":"08d2e91f-9187-4632-b83f-b966435ebe7f","Type":"ContainerStarted","Data":"b098f378a37f7b0e0dbd8a08c079f91afded5afae097c0cba12eb1730dbc29e9"} Nov 29 06:56:47 crc kubenswrapper[4947]: E1129 06:56:47.812929 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="540c7409-0d06-4ab4-89ab-e2a7dd84cb91" Nov 29 06:56:48 crc kubenswrapper[4947]: I1129 06:56:48.056400 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-586fc4c58d-gvrpk" event={"ID":"08d2e91f-9187-4632-b83f-b966435ebe7f","Type":"ContainerStarted","Data":"1fd061e01bd83948813e7b3408cd32efb34bd44eaf642850fc65259955768edc"} Nov 29 06:56:48 crc kubenswrapper[4947]: I1129 06:56:48.056556 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:48 crc kubenswrapper[4947]: I1129 06:56:48.056847 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:48 crc kubenswrapper[4947]: I1129 06:56:48.061783 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"540c7409-0d06-4ab4-89ab-e2a7dd84cb91","Type":"ContainerStarted","Data":"3aac16915b7ace4044ddfb53e6258f011f1560db3fcf06b6f5e12327f6b17a5c"} Nov 29 06:56:48 crc kubenswrapper[4947]: I1129 06:56:48.061960 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="540c7409-0d06-4ab4-89ab-e2a7dd84cb91" 
containerName="ceilometer-notification-agent" containerID="cri-o://afdf16e6f49dc2357f8c1ef1e8d3cac3af6b2b50ae9675fc1ffab106d7c23f94" gracePeriod=30 Nov 29 06:56:48 crc kubenswrapper[4947]: I1129 06:56:48.061989 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="540c7409-0d06-4ab4-89ab-e2a7dd84cb91" containerName="sg-core" containerID="cri-o://447c4d08e54bff210fbf6d19d24387f947ee760730f96174fb5b42fa399c7c13" gracePeriod=30 Nov 29 06:56:48 crc kubenswrapper[4947]: I1129 06:56:48.061996 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 06:56:48 crc kubenswrapper[4947]: I1129 06:56:48.062003 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="540c7409-0d06-4ab4-89ab-e2a7dd84cb91" containerName="proxy-httpd" containerID="cri-o://3aac16915b7ace4044ddfb53e6258f011f1560db3fcf06b6f5e12327f6b17a5c" gracePeriod=30 Nov 29 06:56:48 crc kubenswrapper[4947]: I1129 06:56:48.064703 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-77d4t" event={"ID":"f26cf011-4f52-4d26-a248-b92906824399","Type":"ContainerStarted","Data":"f5adf08f1c8170eaf109bc2ebac79fcb7d835e3f7563f1e03ca4be766e37dffc"} Nov 29 06:56:48 crc kubenswrapper[4947]: I1129 06:56:48.084714 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-586fc4c58d-gvrpk" podStartSLOduration=11.084687684 podStartE2EDuration="11.084687684s" podCreationTimestamp="2025-11-29 06:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:56:48.083004763 +0000 UTC m=+1359.127386844" watchObservedRunningTime="2025-11-29 06:56:48.084687684 +0000 UTC m=+1359.129069765" Nov 29 06:56:48 crc kubenswrapper[4947]: I1129 06:56:48.129743 4947 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/barbican-db-sync-77d4t" podStartSLOduration=3.196980914 podStartE2EDuration="1m2.129710682s" podCreationTimestamp="2025-11-29 06:55:46 +0000 UTC" firstStartedPulling="2025-11-29 06:55:48.518023617 +0000 UTC m=+1299.562405688" lastFinishedPulling="2025-11-29 06:56:47.450753375 +0000 UTC m=+1358.495135456" observedRunningTime="2025-11-29 06:56:48.125365596 +0000 UTC m=+1359.169747677" watchObservedRunningTime="2025-11-29 06:56:48.129710682 +0000 UTC m=+1359.174092763" Nov 29 06:56:49 crc kubenswrapper[4947]: I1129 06:56:49.077734 4947 generic.go:334] "Generic (PLEG): container finished" podID="540c7409-0d06-4ab4-89ab-e2a7dd84cb91" containerID="3aac16915b7ace4044ddfb53e6258f011f1560db3fcf06b6f5e12327f6b17a5c" exitCode=0 Nov 29 06:56:49 crc kubenswrapper[4947]: I1129 06:56:49.078137 4947 generic.go:334] "Generic (PLEG): container finished" podID="540c7409-0d06-4ab4-89ab-e2a7dd84cb91" containerID="447c4d08e54bff210fbf6d19d24387f947ee760730f96174fb5b42fa399c7c13" exitCode=2 Nov 29 06:56:49 crc kubenswrapper[4947]: I1129 06:56:49.077828 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"540c7409-0d06-4ab4-89ab-e2a7dd84cb91","Type":"ContainerDied","Data":"3aac16915b7ace4044ddfb53e6258f011f1560db3fcf06b6f5e12327f6b17a5c"} Nov 29 06:56:49 crc kubenswrapper[4947]: I1129 06:56:49.078209 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"540c7409-0d06-4ab4-89ab-e2a7dd84cb91","Type":"ContainerDied","Data":"447c4d08e54bff210fbf6d19d24387f947ee760730f96174fb5b42fa399c7c13"} Nov 29 06:56:50 crc kubenswrapper[4947]: I1129 06:56:50.492812 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.101851 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.104909 4947 generic.go:334] "Generic (PLEG): container finished" podID="540c7409-0d06-4ab4-89ab-e2a7dd84cb91" containerID="afdf16e6f49dc2357f8c1ef1e8d3cac3af6b2b50ae9675fc1ffab106d7c23f94" exitCode=0 Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.104966 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"540c7409-0d06-4ab4-89ab-e2a7dd84cb91","Type":"ContainerDied","Data":"afdf16e6f49dc2357f8c1ef1e8d3cac3af6b2b50ae9675fc1ffab106d7c23f94"} Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.105026 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"540c7409-0d06-4ab4-89ab-e2a7dd84cb91","Type":"ContainerDied","Data":"dfa49f481f5ad06875e6f1168ac4ab94504b22beabc02c88f36a809c752d7a69"} Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.105051 4947 scope.go:117] "RemoveContainer" containerID="3aac16915b7ace4044ddfb53e6258f011f1560db3fcf06b6f5e12327f6b17a5c" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.137478 4947 scope.go:117] "RemoveContainer" containerID="447c4d08e54bff210fbf6d19d24387f947ee760730f96174fb5b42fa399c7c13" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.160251 4947 scope.go:117] "RemoveContainer" containerID="afdf16e6f49dc2357f8c1ef1e8d3cac3af6b2b50ae9675fc1ffab106d7c23f94" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.188544 4947 scope.go:117] "RemoveContainer" containerID="3aac16915b7ace4044ddfb53e6258f011f1560db3fcf06b6f5e12327f6b17a5c" Nov 29 06:56:51 crc kubenswrapper[4947]: E1129 06:56:51.192905 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3aac16915b7ace4044ddfb53e6258f011f1560db3fcf06b6f5e12327f6b17a5c\": container with ID starting with 3aac16915b7ace4044ddfb53e6258f011f1560db3fcf06b6f5e12327f6b17a5c not found: ID does not 
exist" containerID="3aac16915b7ace4044ddfb53e6258f011f1560db3fcf06b6f5e12327f6b17a5c" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.193109 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3aac16915b7ace4044ddfb53e6258f011f1560db3fcf06b6f5e12327f6b17a5c"} err="failed to get container status \"3aac16915b7ace4044ddfb53e6258f011f1560db3fcf06b6f5e12327f6b17a5c\": rpc error: code = NotFound desc = could not find container \"3aac16915b7ace4044ddfb53e6258f011f1560db3fcf06b6f5e12327f6b17a5c\": container with ID starting with 3aac16915b7ace4044ddfb53e6258f011f1560db3fcf06b6f5e12327f6b17a5c not found: ID does not exist" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.193152 4947 scope.go:117] "RemoveContainer" containerID="447c4d08e54bff210fbf6d19d24387f947ee760730f96174fb5b42fa399c7c13" Nov 29 06:56:51 crc kubenswrapper[4947]: E1129 06:56:51.193785 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"447c4d08e54bff210fbf6d19d24387f947ee760730f96174fb5b42fa399c7c13\": container with ID starting with 447c4d08e54bff210fbf6d19d24387f947ee760730f96174fb5b42fa399c7c13 not found: ID does not exist" containerID="447c4d08e54bff210fbf6d19d24387f947ee760730f96174fb5b42fa399c7c13" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.194142 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"447c4d08e54bff210fbf6d19d24387f947ee760730f96174fb5b42fa399c7c13"} err="failed to get container status \"447c4d08e54bff210fbf6d19d24387f947ee760730f96174fb5b42fa399c7c13\": rpc error: code = NotFound desc = could not find container \"447c4d08e54bff210fbf6d19d24387f947ee760730f96174fb5b42fa399c7c13\": container with ID starting with 447c4d08e54bff210fbf6d19d24387f947ee760730f96174fb5b42fa399c7c13 not found: ID does not exist" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.194164 4947 scope.go:117] 
"RemoveContainer" containerID="afdf16e6f49dc2357f8c1ef1e8d3cac3af6b2b50ae9675fc1ffab106d7c23f94" Nov 29 06:56:51 crc kubenswrapper[4947]: E1129 06:56:51.194975 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afdf16e6f49dc2357f8c1ef1e8d3cac3af6b2b50ae9675fc1ffab106d7c23f94\": container with ID starting with afdf16e6f49dc2357f8c1ef1e8d3cac3af6b2b50ae9675fc1ffab106d7c23f94 not found: ID does not exist" containerID="afdf16e6f49dc2357f8c1ef1e8d3cac3af6b2b50ae9675fc1ffab106d7c23f94" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.195042 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afdf16e6f49dc2357f8c1ef1e8d3cac3af6b2b50ae9675fc1ffab106d7c23f94"} err="failed to get container status \"afdf16e6f49dc2357f8c1ef1e8d3cac3af6b2b50ae9675fc1ffab106d7c23f94\": rpc error: code = NotFound desc = could not find container \"afdf16e6f49dc2357f8c1ef1e8d3cac3af6b2b50ae9675fc1ffab106d7c23f94\": container with ID starting with afdf16e6f49dc2357f8c1ef1e8d3cac3af6b2b50ae9675fc1ffab106d7c23f94 not found: ID does not exist" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.289670 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-run-httpd\") pod \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.289759 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-sg-core-conf-yaml\") pod \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.289800 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-config-data\") pod \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.289859 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-scripts\") pod \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.289892 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-combined-ca-bundle\") pod \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.289972 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55z55\" (UniqueName: \"kubernetes.io/projected/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-kube-api-access-55z55\") pod \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.290003 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-log-httpd\") pod \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\" (UID: \"540c7409-0d06-4ab4-89ab-e2a7dd84cb91\") " Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.290639 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "540c7409-0d06-4ab4-89ab-e2a7dd84cb91" (UID: "540c7409-0d06-4ab4-89ab-e2a7dd84cb91"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.290878 4947 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.291211 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "540c7409-0d06-4ab4-89ab-e2a7dd84cb91" (UID: "540c7409-0d06-4ab4-89ab-e2a7dd84cb91"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.298039 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-scripts" (OuterVolumeSpecName: "scripts") pod "540c7409-0d06-4ab4-89ab-e2a7dd84cb91" (UID: "540c7409-0d06-4ab4-89ab-e2a7dd84cb91"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.298456 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-kube-api-access-55z55" (OuterVolumeSpecName: "kube-api-access-55z55") pod "540c7409-0d06-4ab4-89ab-e2a7dd84cb91" (UID: "540c7409-0d06-4ab4-89ab-e2a7dd84cb91"). InnerVolumeSpecName "kube-api-access-55z55". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.322405 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "540c7409-0d06-4ab4-89ab-e2a7dd84cb91" (UID: "540c7409-0d06-4ab4-89ab-e2a7dd84cb91"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.352269 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "540c7409-0d06-4ab4-89ab-e2a7dd84cb91" (UID: "540c7409-0d06-4ab4-89ab-e2a7dd84cb91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.384173 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-config-data" (OuterVolumeSpecName: "config-data") pod "540c7409-0d06-4ab4-89ab-e2a7dd84cb91" (UID: "540c7409-0d06-4ab4-89ab-e2a7dd84cb91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.392897 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.392938 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.392960 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55z55\" (UniqueName: \"kubernetes.io/projected/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-kube-api-access-55z55\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.392982 4947 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-log-httpd\") 
on node \"crc\" DevicePath \"\"" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.392997 4947 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:51 crc kubenswrapper[4947]: I1129 06:56:51.393008 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/540c7409-0d06-4ab4-89ab-e2a7dd84cb91-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.117428 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.119042 4947 generic.go:334] "Generic (PLEG): container finished" podID="7d3675d1-9c60-4463-936e-95953f64b250" containerID="dea28b99f1041cfd6ccad7f37b448d068eec67196b39a0e12bbb99e845020b24" exitCode=0 Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.119108 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6njvc" event={"ID":"7d3675d1-9c60-4463-936e-95953f64b250","Type":"ContainerDied","Data":"dea28b99f1041cfd6ccad7f37b448d068eec67196b39a0e12bbb99e845020b24"} Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.194311 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.204384 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.228069 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:56:52 crc kubenswrapper[4947]: E1129 06:56:52.228683 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="540c7409-0d06-4ab4-89ab-e2a7dd84cb91" containerName="proxy-httpd" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 
06:56:52.228711 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="540c7409-0d06-4ab4-89ab-e2a7dd84cb91" containerName="proxy-httpd" Nov 29 06:56:52 crc kubenswrapper[4947]: E1129 06:56:52.228735 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="540c7409-0d06-4ab4-89ab-e2a7dd84cb91" containerName="sg-core" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.228746 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="540c7409-0d06-4ab4-89ab-e2a7dd84cb91" containerName="sg-core" Nov 29 06:56:52 crc kubenswrapper[4947]: E1129 06:56:52.228767 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="540c7409-0d06-4ab4-89ab-e2a7dd84cb91" containerName="ceilometer-notification-agent" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.228776 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="540c7409-0d06-4ab4-89ab-e2a7dd84cb91" containerName="ceilometer-notification-agent" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.229005 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="540c7409-0d06-4ab4-89ab-e2a7dd84cb91" containerName="proxy-httpd" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.229062 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="540c7409-0d06-4ab4-89ab-e2a7dd84cb91" containerName="ceilometer-notification-agent" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.229110 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="540c7409-0d06-4ab4-89ab-e2a7dd84cb91" containerName="sg-core" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.231446 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.234968 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.236297 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.241489 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.411300 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-scripts\") pod \"ceilometer-0\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.411393 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glvt6\" (UniqueName: \"kubernetes.io/projected/c9825e55-7596-4c45-aa4c-0b74cc470e65-kube-api-access-glvt6\") pod \"ceilometer-0\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.412337 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.412399 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-config-data\") pod \"ceilometer-0\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " 
pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.412494 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9825e55-7596-4c45-aa4c-0b74cc470e65-log-httpd\") pod \"ceilometer-0\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.412615 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.412650 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9825e55-7596-4c45-aa4c-0b74cc470e65-run-httpd\") pod \"ceilometer-0\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.514544 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-scripts\") pod \"ceilometer-0\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.515025 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glvt6\" (UniqueName: \"kubernetes.io/projected/c9825e55-7596-4c45-aa4c-0b74cc470e65-kube-api-access-glvt6\") pod \"ceilometer-0\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.515195 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.515356 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-config-data\") pod \"ceilometer-0\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.515472 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9825e55-7596-4c45-aa4c-0b74cc470e65-log-httpd\") pod \"ceilometer-0\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.515635 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.515754 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9825e55-7596-4c45-aa4c-0b74cc470e65-run-httpd\") pod \"ceilometer-0\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.516287 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9825e55-7596-4c45-aa4c-0b74cc470e65-log-httpd\") pod \"ceilometer-0\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: 
I1129 06:56:52.516680 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9825e55-7596-4c45-aa4c-0b74cc470e65-run-httpd\") pod \"ceilometer-0\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.524292 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-config-data\") pod \"ceilometer-0\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.525303 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.532210 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.532336 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-scripts\") pod \"ceilometer-0\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.536189 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glvt6\" (UniqueName: \"kubernetes.io/projected/c9825e55-7596-4c45-aa4c-0b74cc470e65-kube-api-access-glvt6\") pod \"ceilometer-0\" (UID: 
\"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " pod="openstack/ceilometer-0" Nov 29 06:56:52 crc kubenswrapper[4947]: I1129 06:56:52.562464 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:56:53 crc kubenswrapper[4947]: I1129 06:56:53.146158 4947 generic.go:334] "Generic (PLEG): container finished" podID="b341c7eb-214f-49d0-ae91-a27c56857739" containerID="d094cbacc0562b9e4a52068ebcc2eb01cb11df7772d6028bab72f1ff549b5121" exitCode=0 Nov 29 06:56:53 crc kubenswrapper[4947]: I1129 06:56:53.146268 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-l67pc" event={"ID":"b341c7eb-214f-49d0-ae91-a27c56857739","Type":"ContainerDied","Data":"d094cbacc0562b9e4a52068ebcc2eb01cb11df7772d6028bab72f1ff549b5121"} Nov 29 06:56:53 crc kubenswrapper[4947]: I1129 06:56:53.193094 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="540c7409-0d06-4ab4-89ab-e2a7dd84cb91" path="/var/lib/kubelet/pods/540c7409-0d06-4ab4-89ab-e2a7dd84cb91/volumes" Nov 29 06:56:53 crc kubenswrapper[4947]: I1129 06:56:53.664741 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6njvc" Nov 29 06:56:53 crc kubenswrapper[4947]: I1129 06:56:53.772139 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:56:53 crc kubenswrapper[4947]: W1129 06:56:53.777171 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9825e55_7596_4c45_aa4c_0b74cc470e65.slice/crio-7546638c37d626da1f4f4b7c74ce66a7c1fc52c5acd9d42cd360cb93676dbf8a WatchSource:0}: Error finding container 7546638c37d626da1f4f4b7c74ce66a7c1fc52c5acd9d42cd360cb93676dbf8a: Status 404 returned error can't find the container with id 7546638c37d626da1f4f4b7c74ce66a7c1fc52c5acd9d42cd360cb93676dbf8a Nov 29 06:56:53 crc kubenswrapper[4947]: I1129 06:56:53.782982 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 06:56:53 crc kubenswrapper[4947]: I1129 06:56:53.838498 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d3675d1-9c60-4463-936e-95953f64b250-db-sync-config-data\") pod \"7d3675d1-9c60-4463-936e-95953f64b250\" (UID: \"7d3675d1-9c60-4463-936e-95953f64b250\") " Nov 29 06:56:53 crc kubenswrapper[4947]: I1129 06:56:53.838697 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3675d1-9c60-4463-936e-95953f64b250-combined-ca-bundle\") pod \"7d3675d1-9c60-4463-936e-95953f64b250\" (UID: \"7d3675d1-9c60-4463-936e-95953f64b250\") " Nov 29 06:56:53 crc kubenswrapper[4947]: I1129 06:56:53.838759 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkm2g\" (UniqueName: \"kubernetes.io/projected/7d3675d1-9c60-4463-936e-95953f64b250-kube-api-access-bkm2g\") pod \"7d3675d1-9c60-4463-936e-95953f64b250\" (UID: 
\"7d3675d1-9c60-4463-936e-95953f64b250\") " Nov 29 06:56:53 crc kubenswrapper[4947]: I1129 06:56:53.838868 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3675d1-9c60-4463-936e-95953f64b250-config-data\") pod \"7d3675d1-9c60-4463-936e-95953f64b250\" (UID: \"7d3675d1-9c60-4463-936e-95953f64b250\") " Nov 29 06:56:53 crc kubenswrapper[4947]: I1129 06:56:53.846640 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d3675d1-9c60-4463-936e-95953f64b250-kube-api-access-bkm2g" (OuterVolumeSpecName: "kube-api-access-bkm2g") pod "7d3675d1-9c60-4463-936e-95953f64b250" (UID: "7d3675d1-9c60-4463-936e-95953f64b250"). InnerVolumeSpecName "kube-api-access-bkm2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:56:53 crc kubenswrapper[4947]: I1129 06:56:53.847691 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d3675d1-9c60-4463-936e-95953f64b250-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7d3675d1-9c60-4463-936e-95953f64b250" (UID: "7d3675d1-9c60-4463-936e-95953f64b250"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:53 crc kubenswrapper[4947]: I1129 06:56:53.869232 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d3675d1-9c60-4463-936e-95953f64b250-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d3675d1-9c60-4463-936e-95953f64b250" (UID: "7d3675d1-9c60-4463-936e-95953f64b250"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:53 crc kubenswrapper[4947]: I1129 06:56:53.890351 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d3675d1-9c60-4463-936e-95953f64b250-config-data" (OuterVolumeSpecName: "config-data") pod "7d3675d1-9c60-4463-936e-95953f64b250" (UID: "7d3675d1-9c60-4463-936e-95953f64b250"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:53 crc kubenswrapper[4947]: I1129 06:56:53.940705 4947 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d3675d1-9c60-4463-936e-95953f64b250-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:53 crc kubenswrapper[4947]: I1129 06:56:53.940765 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3675d1-9c60-4463-936e-95953f64b250-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:53 crc kubenswrapper[4947]: I1129 06:56:53.940783 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkm2g\" (UniqueName: \"kubernetes.io/projected/7d3675d1-9c60-4463-936e-95953f64b250-kube-api-access-bkm2g\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:53 crc kubenswrapper[4947]: I1129 06:56:53.940798 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3675d1-9c60-4463-936e-95953f64b250-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.157117 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9825e55-7596-4c45-aa4c-0b74cc470e65","Type":"ContainerStarted","Data":"7546638c37d626da1f4f4b7c74ce66a7c1fc52c5acd9d42cd360cb93676dbf8a"} Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.159327 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6njvc" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.159503 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6njvc" event={"ID":"7d3675d1-9c60-4463-936e-95953f64b250","Type":"ContainerDied","Data":"47e7035241b780ef8abfda771ec110db3ef2e45156a801a53e025ddbabc06125"} Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.160125 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47e7035241b780ef8abfda771ec110db3ef2e45156a801a53e025ddbabc06125" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.165369 4947 generic.go:334] "Generic (PLEG): container finished" podID="f26cf011-4f52-4d26-a248-b92906824399" containerID="f5adf08f1c8170eaf109bc2ebac79fcb7d835e3f7563f1e03ca4be766e37dffc" exitCode=0 Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.165471 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-77d4t" event={"ID":"f26cf011-4f52-4d26-a248-b92906824399","Type":"ContainerDied","Data":"f5adf08f1c8170eaf109bc2ebac79fcb7d835e3f7563f1e03ca4be766e37dffc"} Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.511989 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-l67pc" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.650695 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-zxm5n"] Nov 29 06:56:54 crc kubenswrapper[4947]: E1129 06:56:54.651588 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b341c7eb-214f-49d0-ae91-a27c56857739" containerName="cinder-db-sync" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.651612 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b341c7eb-214f-49d0-ae91-a27c56857739" containerName="cinder-db-sync" Nov 29 06:56:54 crc kubenswrapper[4947]: E1129 06:56:54.651625 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3675d1-9c60-4463-936e-95953f64b250" containerName="glance-db-sync" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.651631 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3675d1-9c60-4463-936e-95953f64b250" containerName="glance-db-sync" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.651797 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="b341c7eb-214f-49d0-ae91-a27c56857739" containerName="cinder-db-sync" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.651836 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d3675d1-9c60-4463-936e-95953f64b250" containerName="glance-db-sync" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.655030 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-zxm5n" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.656484 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpj5m\" (UniqueName: \"kubernetes.io/projected/b341c7eb-214f-49d0-ae91-a27c56857739-kube-api-access-rpj5m\") pod \"b341c7eb-214f-49d0-ae91-a27c56857739\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.656579 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-scripts\") pod \"b341c7eb-214f-49d0-ae91-a27c56857739\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.656633 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-combined-ca-bundle\") pod \"b341c7eb-214f-49d0-ae91-a27c56857739\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.656694 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b341c7eb-214f-49d0-ae91-a27c56857739-etc-machine-id\") pod \"b341c7eb-214f-49d0-ae91-a27c56857739\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.656917 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-db-sync-config-data\") pod \"b341c7eb-214f-49d0-ae91-a27c56857739\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.657037 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-config-data\") pod \"b341c7eb-214f-49d0-ae91-a27c56857739\" (UID: \"b341c7eb-214f-49d0-ae91-a27c56857739\") " Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.657509 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b341c7eb-214f-49d0-ae91-a27c56857739-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b341c7eb-214f-49d0-ae91-a27c56857739" (UID: "b341c7eb-214f-49d0-ae91-a27c56857739"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.685035 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b341c7eb-214f-49d0-ae91-a27c56857739-kube-api-access-rpj5m" (OuterVolumeSpecName: "kube-api-access-rpj5m") pod "b341c7eb-214f-49d0-ae91-a27c56857739" (UID: "b341c7eb-214f-49d0-ae91-a27c56857739"). InnerVolumeSpecName "kube-api-access-rpj5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.685156 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-scripts" (OuterVolumeSpecName: "scripts") pod "b341c7eb-214f-49d0-ae91-a27c56857739" (UID: "b341c7eb-214f-49d0-ae91-a27c56857739"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.685402 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b341c7eb-214f-49d0-ae91-a27c56857739" (UID: "b341c7eb-214f-49d0-ae91-a27c56857739"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.694452 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-zxm5n"] Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.736036 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b341c7eb-214f-49d0-ae91-a27c56857739" (UID: "b341c7eb-214f-49d0-ae91-a27c56857739"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.770495 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-zxm5n\" (UID: \"5b906095-1e6e-42c3-9952-ad436efa3fbf\") " pod="openstack/dnsmasq-dns-7987f74bbc-zxm5n" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.770595 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x4dl\" (UniqueName: \"kubernetes.io/projected/5b906095-1e6e-42c3-9952-ad436efa3fbf-kube-api-access-6x4dl\") pod \"dnsmasq-dns-7987f74bbc-zxm5n\" (UID: \"5b906095-1e6e-42c3-9952-ad436efa3fbf\") " pod="openstack/dnsmasq-dns-7987f74bbc-zxm5n" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.770714 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-config\") pod \"dnsmasq-dns-7987f74bbc-zxm5n\" (UID: \"5b906095-1e6e-42c3-9952-ad436efa3fbf\") " pod="openstack/dnsmasq-dns-7987f74bbc-zxm5n" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.770788 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-zxm5n\" (UID: \"5b906095-1e6e-42c3-9952-ad436efa3fbf\") " pod="openstack/dnsmasq-dns-7987f74bbc-zxm5n" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.771435 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-zxm5n\" (UID: \"5b906095-1e6e-42c3-9952-ad436efa3fbf\") " pod="openstack/dnsmasq-dns-7987f74bbc-zxm5n" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.776230 4947 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.776271 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpj5m\" (UniqueName: \"kubernetes.io/projected/b341c7eb-214f-49d0-ae91-a27c56857739-kube-api-access-rpj5m\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.776290 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.776308 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.776320 4947 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/b341c7eb-214f-49d0-ae91-a27c56857739-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.785730 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-config-data" (OuterVolumeSpecName: "config-data") pod "b341c7eb-214f-49d0-ae91-a27c56857739" (UID: "b341c7eb-214f-49d0-ae91-a27c56857739"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.877483 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-zxm5n\" (UID: \"5b906095-1e6e-42c3-9952-ad436efa3fbf\") " pod="openstack/dnsmasq-dns-7987f74bbc-zxm5n" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.877539 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x4dl\" (UniqueName: \"kubernetes.io/projected/5b906095-1e6e-42c3-9952-ad436efa3fbf-kube-api-access-6x4dl\") pod \"dnsmasq-dns-7987f74bbc-zxm5n\" (UID: \"5b906095-1e6e-42c3-9952-ad436efa3fbf\") " pod="openstack/dnsmasq-dns-7987f74bbc-zxm5n" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.877614 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-config\") pod \"dnsmasq-dns-7987f74bbc-zxm5n\" (UID: \"5b906095-1e6e-42c3-9952-ad436efa3fbf\") " pod="openstack/dnsmasq-dns-7987f74bbc-zxm5n" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.877633 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-zxm5n\" (UID: 
\"5b906095-1e6e-42c3-9952-ad436efa3fbf\") " pod="openstack/dnsmasq-dns-7987f74bbc-zxm5n" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.877697 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-zxm5n\" (UID: \"5b906095-1e6e-42c3-9952-ad436efa3fbf\") " pod="openstack/dnsmasq-dns-7987f74bbc-zxm5n" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.877759 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b341c7eb-214f-49d0-ae91-a27c56857739-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.878782 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-zxm5n\" (UID: \"5b906095-1e6e-42c3-9952-ad436efa3fbf\") " pod="openstack/dnsmasq-dns-7987f74bbc-zxm5n" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.879821 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-zxm5n\" (UID: \"5b906095-1e6e-42c3-9952-ad436efa3fbf\") " pod="openstack/dnsmasq-dns-7987f74bbc-zxm5n" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.880797 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-zxm5n\" (UID: \"5b906095-1e6e-42c3-9952-ad436efa3fbf\") " pod="openstack/dnsmasq-dns-7987f74bbc-zxm5n" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.884575 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-config\") pod \"dnsmasq-dns-7987f74bbc-zxm5n\" (UID: \"5b906095-1e6e-42c3-9952-ad436efa3fbf\") " pod="openstack/dnsmasq-dns-7987f74bbc-zxm5n" Nov 29 06:56:54 crc kubenswrapper[4947]: I1129 06:56:54.901144 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x4dl\" (UniqueName: \"kubernetes.io/projected/5b906095-1e6e-42c3-9952-ad436efa3fbf-kube-api-access-6x4dl\") pod \"dnsmasq-dns-7987f74bbc-zxm5n\" (UID: \"5b906095-1e6e-42c3-9952-ad436efa3fbf\") " pod="openstack/dnsmasq-dns-7987f74bbc-zxm5n" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.140402 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-zxm5n" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.178103 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-l67pc" event={"ID":"b341c7eb-214f-49d0-ae91-a27c56857739","Type":"ContainerDied","Data":"eda23693ae8e4d49522fc87a6b8ae86c3c1fe0f5471bf0148e0c8c0c5e4befb5"} Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.178160 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eda23693ae8e4d49522fc87a6b8ae86c3c1fe0f5471bf0148e0c8c0c5e4befb5" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.178358 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-l67pc" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.205279 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9825e55-7596-4c45-aa4c-0b74cc470e65","Type":"ContainerStarted","Data":"c2398ce9e4ac38cb6fbea2be89e944f75376f74a1462601f3734203cb62d622a"} Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.475248 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.477868 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.483544 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.483649 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rss2f" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.484108 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.485325 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.490150 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.533327 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-zxm5n"] Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.597499 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"f3ec3135-6049-4853-b571-d23200456fc5\") " pod="openstack/cinder-scheduler-0" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.597554 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-scripts\") pod \"cinder-scheduler-0\" (UID: \"f3ec3135-6049-4853-b571-d23200456fc5\") " pod="openstack/cinder-scheduler-0" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.597582 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f22vf\" (UniqueName: \"kubernetes.io/projected/f3ec3135-6049-4853-b571-d23200456fc5-kube-api-access-f22vf\") pod \"cinder-scheduler-0\" (UID: \"f3ec3135-6049-4853-b571-d23200456fc5\") " pod="openstack/cinder-scheduler-0" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.597651 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f3ec3135-6049-4853-b571-d23200456fc5\") " pod="openstack/cinder-scheduler-0" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.597687 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3ec3135-6049-4853-b571-d23200456fc5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f3ec3135-6049-4853-b571-d23200456fc5\") " pod="openstack/cinder-scheduler-0" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.597716 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f3ec3135-6049-4853-b571-d23200456fc5\") " 
pod="openstack/cinder-scheduler-0" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.597812 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64cd874c85-ts9sk"] Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.600559 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64cd874c85-ts9sk" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.647846 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64cd874c85-ts9sk"] Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.699691 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f3ec3135-6049-4853-b571-d23200456fc5\") " pod="openstack/cinder-scheduler-0" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.699784 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-config-data\") pod \"cinder-scheduler-0\" (UID: \"f3ec3135-6049-4853-b571-d23200456fc5\") " pod="openstack/cinder-scheduler-0" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.699814 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-scripts\") pod \"cinder-scheduler-0\" (UID: \"f3ec3135-6049-4853-b571-d23200456fc5\") " pod="openstack/cinder-scheduler-0" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.699839 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f22vf\" (UniqueName: \"kubernetes.io/projected/f3ec3135-6049-4853-b571-d23200456fc5-kube-api-access-f22vf\") pod \"cinder-scheduler-0\" (UID: \"f3ec3135-6049-4853-b571-d23200456fc5\") " 
pod="openstack/cinder-scheduler-0" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.699909 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f3ec3135-6049-4853-b571-d23200456fc5\") " pod="openstack/cinder-scheduler-0" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.699944 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3ec3135-6049-4853-b571-d23200456fc5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f3ec3135-6049-4853-b571-d23200456fc5\") " pod="openstack/cinder-scheduler-0" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.700034 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3ec3135-6049-4853-b571-d23200456fc5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f3ec3135-6049-4853-b571-d23200456fc5\") " pod="openstack/cinder-scheduler-0" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.713106 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-config-data\") pod \"cinder-scheduler-0\" (UID: \"f3ec3135-6049-4853-b571-d23200456fc5\") " pod="openstack/cinder-scheduler-0" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.714851 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-scripts\") pod \"cinder-scheduler-0\" (UID: \"f3ec3135-6049-4853-b571-d23200456fc5\") " pod="openstack/cinder-scheduler-0" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.717879 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f3ec3135-6049-4853-b571-d23200456fc5\") " pod="openstack/cinder-scheduler-0" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.725184 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f3ec3135-6049-4853-b571-d23200456fc5\") " pod="openstack/cinder-scheduler-0" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.735852 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f22vf\" (UniqueName: \"kubernetes.io/projected/f3ec3135-6049-4853-b571-d23200456fc5-kube-api-access-f22vf\") pod \"cinder-scheduler-0\" (UID: \"f3ec3135-6049-4853-b571-d23200456fc5\") " pod="openstack/cinder-scheduler-0" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.759037 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-zxm5n"] Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.804578 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-ovsdbserver-sb\") pod \"dnsmasq-dns-64cd874c85-ts9sk\" (UID: \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\") " pod="openstack/dnsmasq-dns-64cd874c85-ts9sk" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.804650 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzlxl\" (UniqueName: \"kubernetes.io/projected/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-kube-api-access-bzlxl\") pod \"dnsmasq-dns-64cd874c85-ts9sk\" (UID: \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\") " pod="openstack/dnsmasq-dns-64cd874c85-ts9sk" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 
06:56:55.804676 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-config\") pod \"dnsmasq-dns-64cd874c85-ts9sk\" (UID: \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\") " pod="openstack/dnsmasq-dns-64cd874c85-ts9sk" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.805114 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-dns-svc\") pod \"dnsmasq-dns-64cd874c85-ts9sk\" (UID: \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\") " pod="openstack/dnsmasq-dns-64cd874c85-ts9sk" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.805378 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-ovsdbserver-nb\") pod \"dnsmasq-dns-64cd874c85-ts9sk\" (UID: \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\") " pod="openstack/dnsmasq-dns-64cd874c85-ts9sk" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.818097 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.864410 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.866007 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.868530 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.878720 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.879667 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-77d4t" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.907481 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-ovsdbserver-nb\") pod \"dnsmasq-dns-64cd874c85-ts9sk\" (UID: \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\") " pod="openstack/dnsmasq-dns-64cd874c85-ts9sk" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.907544 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-ovsdbserver-sb\") pod \"dnsmasq-dns-64cd874c85-ts9sk\" (UID: \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\") " pod="openstack/dnsmasq-dns-64cd874c85-ts9sk" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.907579 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzlxl\" (UniqueName: \"kubernetes.io/projected/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-kube-api-access-bzlxl\") pod \"dnsmasq-dns-64cd874c85-ts9sk\" (UID: \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\") " pod="openstack/dnsmasq-dns-64cd874c85-ts9sk" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.907603 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-config\") pod 
\"dnsmasq-dns-64cd874c85-ts9sk\" (UID: \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\") " pod="openstack/dnsmasq-dns-64cd874c85-ts9sk" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.907681 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-dns-svc\") pod \"dnsmasq-dns-64cd874c85-ts9sk\" (UID: \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\") " pod="openstack/dnsmasq-dns-64cd874c85-ts9sk" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.909382 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-ovsdbserver-sb\") pod \"dnsmasq-dns-64cd874c85-ts9sk\" (UID: \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\") " pod="openstack/dnsmasq-dns-64cd874c85-ts9sk" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.909947 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-ovsdbserver-nb\") pod \"dnsmasq-dns-64cd874c85-ts9sk\" (UID: \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\") " pod="openstack/dnsmasq-dns-64cd874c85-ts9sk" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.910279 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-dns-svc\") pod \"dnsmasq-dns-64cd874c85-ts9sk\" (UID: \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\") " pod="openstack/dnsmasq-dns-64cd874c85-ts9sk" Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.910509 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-config\") pod \"dnsmasq-dns-64cd874c85-ts9sk\" (UID: \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\") " pod="openstack/dnsmasq-dns-64cd874c85-ts9sk" 
Nov 29 06:56:55 crc kubenswrapper[4947]: I1129 06:56:55.934249 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzlxl\" (UniqueName: \"kubernetes.io/projected/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-kube-api-access-bzlxl\") pod \"dnsmasq-dns-64cd874c85-ts9sk\" (UID: \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\") " pod="openstack/dnsmasq-dns-64cd874c85-ts9sk" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.008632 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f26cf011-4f52-4d26-a248-b92906824399-db-sync-config-data\") pod \"f26cf011-4f52-4d26-a248-b92906824399\" (UID: \"f26cf011-4f52-4d26-a248-b92906824399\") " Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.008745 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26cf011-4f52-4d26-a248-b92906824399-combined-ca-bundle\") pod \"f26cf011-4f52-4d26-a248-b92906824399\" (UID: \"f26cf011-4f52-4d26-a248-b92906824399\") " Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.008880 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hncz2\" (UniqueName: \"kubernetes.io/projected/f26cf011-4f52-4d26-a248-b92906824399-kube-api-access-hncz2\") pod \"f26cf011-4f52-4d26-a248-b92906824399\" (UID: \"f26cf011-4f52-4d26-a248-b92906824399\") " Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.009185 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-config-data-custom\") pod \"cinder-api-0\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") " pod="openstack/cinder-api-0" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.009254 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") " pod="openstack/cinder-api-0" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.009344 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4e05c1a-b07e-4307-ab90-314a6fcd6619-logs\") pod \"cinder-api-0\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") " pod="openstack/cinder-api-0" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.011093 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xclr7\" (UniqueName: \"kubernetes.io/projected/e4e05c1a-b07e-4307-ab90-314a6fcd6619-kube-api-access-xclr7\") pod \"cinder-api-0\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") " pod="openstack/cinder-api-0" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.011170 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-scripts\") pod \"cinder-api-0\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") " pod="openstack/cinder-api-0" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.011231 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e4e05c1a-b07e-4307-ab90-314a6fcd6619-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") " pod="openstack/cinder-api-0" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.011324 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-config-data\") pod \"cinder-api-0\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") " pod="openstack/cinder-api-0" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.014165 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26cf011-4f52-4d26-a248-b92906824399-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f26cf011-4f52-4d26-a248-b92906824399" (UID: "f26cf011-4f52-4d26-a248-b92906824399"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.017505 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f26cf011-4f52-4d26-a248-b92906824399-kube-api-access-hncz2" (OuterVolumeSpecName: "kube-api-access-hncz2") pod "f26cf011-4f52-4d26-a248-b92906824399" (UID: "f26cf011-4f52-4d26-a248-b92906824399"). InnerVolumeSpecName "kube-api-access-hncz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.115497 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26cf011-4f52-4d26-a248-b92906824399-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f26cf011-4f52-4d26-a248-b92906824399" (UID: "f26cf011-4f52-4d26-a248-b92906824399"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.119781 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4e05c1a-b07e-4307-ab90-314a6fcd6619-logs\") pod \"cinder-api-0\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") " pod="openstack/cinder-api-0" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.119880 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xclr7\" (UniqueName: \"kubernetes.io/projected/e4e05c1a-b07e-4307-ab90-314a6fcd6619-kube-api-access-xclr7\") pod \"cinder-api-0\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") " pod="openstack/cinder-api-0" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.119976 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-scripts\") pod \"cinder-api-0\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") " pod="openstack/cinder-api-0" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.120046 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e4e05c1a-b07e-4307-ab90-314a6fcd6619-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") " pod="openstack/cinder-api-0" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.120201 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-config-data\") pod \"cinder-api-0\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") " pod="openstack/cinder-api-0" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.120394 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-config-data-custom\") pod \"cinder-api-0\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") " pod="openstack/cinder-api-0" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.120453 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") " pod="openstack/cinder-api-0" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.121112 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hncz2\" (UniqueName: \"kubernetes.io/projected/f26cf011-4f52-4d26-a248-b92906824399-kube-api-access-hncz2\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.128741 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4e05c1a-b07e-4307-ab90-314a6fcd6619-logs\") pod \"cinder-api-0\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") " pod="openstack/cinder-api-0" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.128799 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e4e05c1a-b07e-4307-ab90-314a6fcd6619-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") " pod="openstack/cinder-api-0" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.134307 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") " pod="openstack/cinder-api-0" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.134410 4947 reconciler_common.go:293] "Volume detached for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f26cf011-4f52-4d26-a248-b92906824399-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.134436 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26cf011-4f52-4d26-a248-b92906824399-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.134671 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-config-data\") pod \"cinder-api-0\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") " pod="openstack/cinder-api-0" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.134738 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-scripts\") pod \"cinder-api-0\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") " pod="openstack/cinder-api-0" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.134893 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-config-data-custom\") pod \"cinder-api-0\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") " pod="openstack/cinder-api-0" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.145004 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xclr7\" (UniqueName: \"kubernetes.io/projected/e4e05c1a-b07e-4307-ab90-314a6fcd6619-kube-api-access-xclr7\") pod \"cinder-api-0\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") " pod="openstack/cinder-api-0" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.154271 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64cd874c85-ts9sk" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.209620 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.256493 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9825e55-7596-4c45-aa4c-0b74cc470e65","Type":"ContainerStarted","Data":"81435585e674f07540a92eb4a4645d7cc44ffa1e357d5aa68425c08abbe3c3fc"} Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.261836 4947 generic.go:334] "Generic (PLEG): container finished" podID="5b906095-1e6e-42c3-9952-ad436efa3fbf" containerID="10bd9d566a272be5ec374787ee424cbd9f3478f5cfb14d4b171f04fb833f58bf" exitCode=0 Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.263705 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-zxm5n" event={"ID":"5b906095-1e6e-42c3-9952-ad436efa3fbf","Type":"ContainerDied","Data":"10bd9d566a272be5ec374787ee424cbd9f3478f5cfb14d4b171f04fb833f58bf"} Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.263846 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-zxm5n" event={"ID":"5b906095-1e6e-42c3-9952-ad436efa3fbf","Type":"ContainerStarted","Data":"3ae54f635aa3fb40bfa6354086d57df6e0ee8fb53a292e7b8ade20016940fbe8"} Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.271026 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-77d4t" event={"ID":"f26cf011-4f52-4d26-a248-b92906824399","Type":"ContainerDied","Data":"cb221bfdd7ece81457b8fcba3f53f0f7667d5cb1cacfa7ef4403c24844cf6099"} Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.271083 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb221bfdd7ece81457b8fcba3f53f0f7667d5cb1cacfa7ef4403c24844cf6099" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 
06:56:56.271166 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-77d4t" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.321384 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.475554 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-85c67749d7-4wkg2"] Nov 29 06:56:56 crc kubenswrapper[4947]: E1129 06:56:56.476163 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26cf011-4f52-4d26-a248-b92906824399" containerName="barbican-db-sync" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.476187 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26cf011-4f52-4d26-a248-b92906824399" containerName="barbican-db-sync" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.476487 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26cf011-4f52-4d26-a248-b92906824399" containerName="barbican-db-sync" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.477810 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-85c67749d7-4wkg2" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.485701 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.485932 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9znx8" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.499670 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.523395 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-59fd76fd88-6ghpn"] Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.525261 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-59fd76fd88-6ghpn" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.530619 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.548535 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccadffe2-e81e-44ab-a879-fefa01177386-logs\") pod \"barbican-worker-85c67749d7-4wkg2\" (UID: \"ccadffe2-e81e-44ab-a879-fefa01177386\") " pod="openstack/barbican-worker-85c67749d7-4wkg2" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.548746 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae7d6be8-33b3-4771-aebf-d7302883bd3d-config-data-custom\") pod \"barbican-keystone-listener-59fd76fd88-6ghpn\" (UID: \"ae7d6be8-33b3-4771-aebf-d7302883bd3d\") " pod="openstack/barbican-keystone-listener-59fd76fd88-6ghpn" Nov 29 
06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.548776 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ccadffe2-e81e-44ab-a879-fefa01177386-config-data-custom\") pod \"barbican-worker-85c67749d7-4wkg2\" (UID: \"ccadffe2-e81e-44ab-a879-fefa01177386\") " pod="openstack/barbican-worker-85c67749d7-4wkg2" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.548825 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccadffe2-e81e-44ab-a879-fefa01177386-combined-ca-bundle\") pod \"barbican-worker-85c67749d7-4wkg2\" (UID: \"ccadffe2-e81e-44ab-a879-fefa01177386\") " pod="openstack/barbican-worker-85c67749d7-4wkg2" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.548852 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x9xr\" (UniqueName: \"kubernetes.io/projected/ae7d6be8-33b3-4771-aebf-d7302883bd3d-kube-api-access-5x9xr\") pod \"barbican-keystone-listener-59fd76fd88-6ghpn\" (UID: \"ae7d6be8-33b3-4771-aebf-d7302883bd3d\") " pod="openstack/barbican-keystone-listener-59fd76fd88-6ghpn" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.548895 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccadffe2-e81e-44ab-a879-fefa01177386-config-data\") pod \"barbican-worker-85c67749d7-4wkg2\" (UID: \"ccadffe2-e81e-44ab-a879-fefa01177386\") " pod="openstack/barbican-worker-85c67749d7-4wkg2" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.548927 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae7d6be8-33b3-4771-aebf-d7302883bd3d-logs\") pod 
\"barbican-keystone-listener-59fd76fd88-6ghpn\" (UID: \"ae7d6be8-33b3-4771-aebf-d7302883bd3d\") " pod="openstack/barbican-keystone-listener-59fd76fd88-6ghpn" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.548991 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae7d6be8-33b3-4771-aebf-d7302883bd3d-combined-ca-bundle\") pod \"barbican-keystone-listener-59fd76fd88-6ghpn\" (UID: \"ae7d6be8-33b3-4771-aebf-d7302883bd3d\") " pod="openstack/barbican-keystone-listener-59fd76fd88-6ghpn" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.549018 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae7d6be8-33b3-4771-aebf-d7302883bd3d-config-data\") pod \"barbican-keystone-listener-59fd76fd88-6ghpn\" (UID: \"ae7d6be8-33b3-4771-aebf-d7302883bd3d\") " pod="openstack/barbican-keystone-listener-59fd76fd88-6ghpn" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.549098 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjn2m\" (UniqueName: \"kubernetes.io/projected/ccadffe2-e81e-44ab-a879-fefa01177386-kube-api-access-xjn2m\") pod \"barbican-worker-85c67749d7-4wkg2\" (UID: \"ccadffe2-e81e-44ab-a879-fefa01177386\") " pod="openstack/barbican-worker-85c67749d7-4wkg2" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.554409 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-85c67749d7-4wkg2"] Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.654244 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae7d6be8-33b3-4771-aebf-d7302883bd3d-logs\") pod \"barbican-keystone-listener-59fd76fd88-6ghpn\" (UID: \"ae7d6be8-33b3-4771-aebf-d7302883bd3d\") " 
pod="openstack/barbican-keystone-listener-59fd76fd88-6ghpn" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.664718 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae7d6be8-33b3-4771-aebf-d7302883bd3d-combined-ca-bundle\") pod \"barbican-keystone-listener-59fd76fd88-6ghpn\" (UID: \"ae7d6be8-33b3-4771-aebf-d7302883bd3d\") " pod="openstack/barbican-keystone-listener-59fd76fd88-6ghpn" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.664909 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae7d6be8-33b3-4771-aebf-d7302883bd3d-config-data\") pod \"barbican-keystone-listener-59fd76fd88-6ghpn\" (UID: \"ae7d6be8-33b3-4771-aebf-d7302883bd3d\") " pod="openstack/barbican-keystone-listener-59fd76fd88-6ghpn" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.665052 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjn2m\" (UniqueName: \"kubernetes.io/projected/ccadffe2-e81e-44ab-a879-fefa01177386-kube-api-access-xjn2m\") pod \"barbican-worker-85c67749d7-4wkg2\" (UID: \"ccadffe2-e81e-44ab-a879-fefa01177386\") " pod="openstack/barbican-worker-85c67749d7-4wkg2" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.665157 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccadffe2-e81e-44ab-a879-fefa01177386-logs\") pod \"barbican-worker-85c67749d7-4wkg2\" (UID: \"ccadffe2-e81e-44ab-a879-fefa01177386\") " pod="openstack/barbican-worker-85c67749d7-4wkg2" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.665660 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccadffe2-e81e-44ab-a879-fefa01177386-logs\") pod \"barbican-worker-85c67749d7-4wkg2\" (UID: \"ccadffe2-e81e-44ab-a879-fefa01177386\") 
" pod="openstack/barbican-worker-85c67749d7-4wkg2" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.665723 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae7d6be8-33b3-4771-aebf-d7302883bd3d-config-data-custom\") pod \"barbican-keystone-listener-59fd76fd88-6ghpn\" (UID: \"ae7d6be8-33b3-4771-aebf-d7302883bd3d\") " pod="openstack/barbican-keystone-listener-59fd76fd88-6ghpn" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.665811 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ccadffe2-e81e-44ab-a879-fefa01177386-config-data-custom\") pod \"barbican-worker-85c67749d7-4wkg2\" (UID: \"ccadffe2-e81e-44ab-a879-fefa01177386\") " pod="openstack/barbican-worker-85c67749d7-4wkg2" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.665853 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccadffe2-e81e-44ab-a879-fefa01177386-combined-ca-bundle\") pod \"barbican-worker-85c67749d7-4wkg2\" (UID: \"ccadffe2-e81e-44ab-a879-fefa01177386\") " pod="openstack/barbican-worker-85c67749d7-4wkg2" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.665912 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x9xr\" (UniqueName: \"kubernetes.io/projected/ae7d6be8-33b3-4771-aebf-d7302883bd3d-kube-api-access-5x9xr\") pod \"barbican-keystone-listener-59fd76fd88-6ghpn\" (UID: \"ae7d6be8-33b3-4771-aebf-d7302883bd3d\") " pod="openstack/barbican-keystone-listener-59fd76fd88-6ghpn" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.665947 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccadffe2-e81e-44ab-a879-fefa01177386-config-data\") pod 
\"barbican-worker-85c67749d7-4wkg2\" (UID: \"ccadffe2-e81e-44ab-a879-fefa01177386\") " pod="openstack/barbican-worker-85c67749d7-4wkg2" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.672710 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-59fd76fd88-6ghpn"] Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.701875 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae7d6be8-33b3-4771-aebf-d7302883bd3d-logs\") pod \"barbican-keystone-listener-59fd76fd88-6ghpn\" (UID: \"ae7d6be8-33b3-4771-aebf-d7302883bd3d\") " pod="openstack/barbican-keystone-listener-59fd76fd88-6ghpn" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.703911 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae7d6be8-33b3-4771-aebf-d7302883bd3d-combined-ca-bundle\") pod \"barbican-keystone-listener-59fd76fd88-6ghpn\" (UID: \"ae7d6be8-33b3-4771-aebf-d7302883bd3d\") " pod="openstack/barbican-keystone-listener-59fd76fd88-6ghpn" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.714674 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccadffe2-e81e-44ab-a879-fefa01177386-config-data\") pod \"barbican-worker-85c67749d7-4wkg2\" (UID: \"ccadffe2-e81e-44ab-a879-fefa01177386\") " pod="openstack/barbican-worker-85c67749d7-4wkg2" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.731146 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae7d6be8-33b3-4771-aebf-d7302883bd3d-config-data-custom\") pod \"barbican-keystone-listener-59fd76fd88-6ghpn\" (UID: \"ae7d6be8-33b3-4771-aebf-d7302883bd3d\") " pod="openstack/barbican-keystone-listener-59fd76fd88-6ghpn" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.753668 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x9xr\" (UniqueName: \"kubernetes.io/projected/ae7d6be8-33b3-4771-aebf-d7302883bd3d-kube-api-access-5x9xr\") pod \"barbican-keystone-listener-59fd76fd88-6ghpn\" (UID: \"ae7d6be8-33b3-4771-aebf-d7302883bd3d\") " pod="openstack/barbican-keystone-listener-59fd76fd88-6ghpn" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.772289 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjn2m\" (UniqueName: \"kubernetes.io/projected/ccadffe2-e81e-44ab-a879-fefa01177386-kube-api-access-xjn2m\") pod \"barbican-worker-85c67749d7-4wkg2\" (UID: \"ccadffe2-e81e-44ab-a879-fefa01177386\") " pod="openstack/barbican-worker-85c67749d7-4wkg2" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.867406 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae7d6be8-33b3-4771-aebf-d7302883bd3d-config-data\") pod \"barbican-keystone-listener-59fd76fd88-6ghpn\" (UID: \"ae7d6be8-33b3-4771-aebf-d7302883bd3d\") " pod="openstack/barbican-keystone-listener-59fd76fd88-6ghpn" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.870485 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccadffe2-e81e-44ab-a879-fefa01177386-combined-ca-bundle\") pod \"barbican-worker-85c67749d7-4wkg2\" (UID: \"ccadffe2-e81e-44ab-a879-fefa01177386\") " pod="openstack/barbican-worker-85c67749d7-4wkg2" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.875174 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-59fd76fd88-6ghpn" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.906792 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ccadffe2-e81e-44ab-a879-fefa01177386-config-data-custom\") pod \"barbican-worker-85c67749d7-4wkg2\" (UID: \"ccadffe2-e81e-44ab-a879-fefa01177386\") " pod="openstack/barbican-worker-85c67749d7-4wkg2" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.908431 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64cd874c85-ts9sk"] Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.925075 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b76cdf485-9slnj"] Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.932632 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.933862 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b76cdf485-9slnj"] Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.941453 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-74cff4c986-xvvw9"] Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.943926 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74cff4c986-xvvw9" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.954321 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-zxm5n" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.955401 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 29 06:56:56 crc kubenswrapper[4947]: I1129 06:56:56.975302 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74cff4c986-xvvw9"] Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.044155 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64cd874c85-ts9sk"] Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.067939 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-config\") pod \"5b906095-1e6e-42c3-9952-ad436efa3fbf\" (UID: \"5b906095-1e6e-42c3-9952-ad436efa3fbf\") " Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.068038 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-ovsdbserver-nb\") pod \"5b906095-1e6e-42c3-9952-ad436efa3fbf\" (UID: \"5b906095-1e6e-42c3-9952-ad436efa3fbf\") " Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.068078 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-ovsdbserver-sb\") pod \"5b906095-1e6e-42c3-9952-ad436efa3fbf\" (UID: \"5b906095-1e6e-42c3-9952-ad436efa3fbf\") " Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.068113 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-dns-svc\") pod \"5b906095-1e6e-42c3-9952-ad436efa3fbf\" (UID: \"5b906095-1e6e-42c3-9952-ad436efa3fbf\") " Nov 29 06:56:57 crc kubenswrapper[4947]: 
I1129 06:56:57.068158 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x4dl\" (UniqueName: \"kubernetes.io/projected/5b906095-1e6e-42c3-9952-ad436efa3fbf-kube-api-access-6x4dl\") pod \"5b906095-1e6e-42c3-9952-ad436efa3fbf\" (UID: \"5b906095-1e6e-42c3-9952-ad436efa3fbf\") " Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.068453 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64edb581-33ba-46eb-a455-ce4f733f6944-logs\") pod \"barbican-api-74cff4c986-xvvw9\" (UID: \"64edb581-33ba-46eb-a455-ce4f733f6944\") " pod="openstack/barbican-api-74cff4c986-xvvw9" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.068488 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxv2t\" (UniqueName: \"kubernetes.io/projected/64edb581-33ba-46eb-a455-ce4f733f6944-kube-api-access-rxv2t\") pod \"barbican-api-74cff4c986-xvvw9\" (UID: \"64edb581-33ba-46eb-a455-ce4f733f6944\") " pod="openstack/barbican-api-74cff4c986-xvvw9" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.068507 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64edb581-33ba-46eb-a455-ce4f733f6944-combined-ca-bundle\") pod \"barbican-api-74cff4c986-xvvw9\" (UID: \"64edb581-33ba-46eb-a455-ce4f733f6944\") " pod="openstack/barbican-api-74cff4c986-xvvw9" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.068556 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b76cdf485-9slnj\" (UID: \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\") " pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" Nov 29 06:56:57 crc 
kubenswrapper[4947]: I1129 06:56:57.068630 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-dns-svc\") pod \"dnsmasq-dns-5b76cdf485-9slnj\" (UID: \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\") " pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.068666 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-config\") pod \"dnsmasq-dns-5b76cdf485-9slnj\" (UID: \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\") " pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.068704 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64edb581-33ba-46eb-a455-ce4f733f6944-config-data-custom\") pod \"barbican-api-74cff4c986-xvvw9\" (UID: \"64edb581-33ba-46eb-a455-ce4f733f6944\") " pod="openstack/barbican-api-74cff4c986-xvvw9" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.068773 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b76cdf485-9slnj\" (UID: \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\") " pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.068834 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crmzn\" (UniqueName: \"kubernetes.io/projected/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-kube-api-access-crmzn\") pod \"dnsmasq-dns-5b76cdf485-9slnj\" (UID: \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\") " 
pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.068865 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64edb581-33ba-46eb-a455-ce4f733f6944-config-data\") pod \"barbican-api-74cff4c986-xvvw9\" (UID: \"64edb581-33ba-46eb-a455-ce4f733f6944\") " pod="openstack/barbican-api-74cff4c986-xvvw9" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.091564 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b906095-1e6e-42c3-9952-ad436efa3fbf-kube-api-access-6x4dl" (OuterVolumeSpecName: "kube-api-access-6x4dl") pod "5b906095-1e6e-42c3-9952-ad436efa3fbf" (UID: "5b906095-1e6e-42c3-9952-ad436efa3fbf"). InnerVolumeSpecName "kube-api-access-6x4dl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.140406 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-85c67749d7-4wkg2" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.170331 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b76cdf485-9slnj\" (UID: \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\") " pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.170734 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crmzn\" (UniqueName: \"kubernetes.io/projected/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-kube-api-access-crmzn\") pod \"dnsmasq-dns-5b76cdf485-9slnj\" (UID: \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\") " pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.170757 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64edb581-33ba-46eb-a455-ce4f733f6944-config-data\") pod \"barbican-api-74cff4c986-xvvw9\" (UID: \"64edb581-33ba-46eb-a455-ce4f733f6944\") " pod="openstack/barbican-api-74cff4c986-xvvw9" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.170783 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64edb581-33ba-46eb-a455-ce4f733f6944-logs\") pod \"barbican-api-74cff4c986-xvvw9\" (UID: \"64edb581-33ba-46eb-a455-ce4f733f6944\") " pod="openstack/barbican-api-74cff4c986-xvvw9" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.170802 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxv2t\" (UniqueName: \"kubernetes.io/projected/64edb581-33ba-46eb-a455-ce4f733f6944-kube-api-access-rxv2t\") pod \"barbican-api-74cff4c986-xvvw9\" (UID: \"64edb581-33ba-46eb-a455-ce4f733f6944\") " 
pod="openstack/barbican-api-74cff4c986-xvvw9" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.170818 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64edb581-33ba-46eb-a455-ce4f733f6944-combined-ca-bundle\") pod \"barbican-api-74cff4c986-xvvw9\" (UID: \"64edb581-33ba-46eb-a455-ce4f733f6944\") " pod="openstack/barbican-api-74cff4c986-xvvw9" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.170854 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b76cdf485-9slnj\" (UID: \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\") " pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.170894 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-dns-svc\") pod \"dnsmasq-dns-5b76cdf485-9slnj\" (UID: \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\") " pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.170918 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-config\") pod \"dnsmasq-dns-5b76cdf485-9slnj\" (UID: \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\") " pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.170940 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64edb581-33ba-46eb-a455-ce4f733f6944-config-data-custom\") pod \"barbican-api-74cff4c986-xvvw9\" (UID: \"64edb581-33ba-46eb-a455-ce4f733f6944\") " pod="openstack/barbican-api-74cff4c986-xvvw9" Nov 29 06:56:57 crc 
kubenswrapper[4947]: I1129 06:56:57.171008 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x4dl\" (UniqueName: \"kubernetes.io/projected/5b906095-1e6e-42c3-9952-ad436efa3fbf-kube-api-access-6x4dl\") on node \"crc\" DevicePath \"\"" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.174072 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b76cdf485-9slnj\" (UID: \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\") " pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.174149 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5b906095-1e6e-42c3-9952-ad436efa3fbf" (UID: "5b906095-1e6e-42c3-9952-ad436efa3fbf"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.175025 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b76cdf485-9slnj\" (UID: \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\") " pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.176957 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64edb581-33ba-46eb-a455-ce4f733f6944-logs\") pod \"barbican-api-74cff4c986-xvvw9\" (UID: \"64edb581-33ba-46eb-a455-ce4f733f6944\") " pod="openstack/barbican-api-74cff4c986-xvvw9" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.177929 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-dns-svc\") pod \"dnsmasq-dns-5b76cdf485-9slnj\" (UID: \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\") " pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.178698 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64edb581-33ba-46eb-a455-ce4f733f6944-combined-ca-bundle\") pod \"barbican-api-74cff4c986-xvvw9\" (UID: \"64edb581-33ba-46eb-a455-ce4f733f6944\") " pod="openstack/barbican-api-74cff4c986-xvvw9" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.179499 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-config\") pod \"dnsmasq-dns-5b76cdf485-9slnj\" (UID: \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\") " pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.185871 4947 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-config" (OuterVolumeSpecName: "config") pod "5b906095-1e6e-42c3-9952-ad436efa3fbf" (UID: "5b906095-1e6e-42c3-9952-ad436efa3fbf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.186100 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5b906095-1e6e-42c3-9952-ad436efa3fbf" (UID: "5b906095-1e6e-42c3-9952-ad436efa3fbf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.189403 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64edb581-33ba-46eb-a455-ce4f733f6944-config-data-custom\") pod \"barbican-api-74cff4c986-xvvw9\" (UID: \"64edb581-33ba-46eb-a455-ce4f733f6944\") " pod="openstack/barbican-api-74cff4c986-xvvw9"
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.197734 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5b906095-1e6e-42c3-9952-ad436efa3fbf" (UID: "5b906095-1e6e-42c3-9952-ad436efa3fbf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.198268 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crmzn\" (UniqueName: \"kubernetes.io/projected/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-kube-api-access-crmzn\") pod \"dnsmasq-dns-5b76cdf485-9slnj\" (UID: \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\") " pod="openstack/dnsmasq-dns-5b76cdf485-9slnj"
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.198722 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxv2t\" (UniqueName: \"kubernetes.io/projected/64edb581-33ba-46eb-a455-ce4f733f6944-kube-api-access-rxv2t\") pod \"barbican-api-74cff4c986-xvvw9\" (UID: \"64edb581-33ba-46eb-a455-ce4f733f6944\") " pod="openstack/barbican-api-74cff4c986-xvvw9"
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.233655 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.234860 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64edb581-33ba-46eb-a455-ce4f733f6944-config-data\") pod \"barbican-api-74cff4c986-xvvw9\" (UID: \"64edb581-33ba-46eb-a455-ce4f733f6944\") " pod="openstack/barbican-api-74cff4c986-xvvw9"
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.262387 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b76cdf485-9slnj"
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.273257 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.273281 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.273293 4947 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.273305 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b906095-1e6e-42c3-9952-ad436efa3fbf-config\") on node \"crc\" DevicePath \"\""
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.278876 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74cff4c986-xvvw9"
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.314177 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64cd874c85-ts9sk" event={"ID":"aa846029-b0e0-4e05-b8c1-07aecc1d4d27","Type":"ContainerStarted","Data":"c4704e8c13fda51b1f899c0843889d532161001928e454be7e058018dd2644a1"}
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.324523 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f3ec3135-6049-4853-b571-d23200456fc5","Type":"ContainerStarted","Data":"14bd56706c0de0c3807cff266d7f3b2c1b2b4aaf920c97524436d23de043e8c1"}
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.370567 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9825e55-7596-4c45-aa4c-0b74cc470e65","Type":"ContainerStarted","Data":"30ce8e2ee9e46d4f382b608f6eb99b4727165939538e98a61788a1256b061db2"}
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.372162 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-zxm5n" event={"ID":"5b906095-1e6e-42c3-9952-ad436efa3fbf","Type":"ContainerDied","Data":"3ae54f635aa3fb40bfa6354086d57df6e0ee8fb53a292e7b8ade20016940fbe8"}
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.372191 4947 scope.go:117] "RemoveContainer" containerID="10bd9d566a272be5ec374787ee424cbd9f3478f5cfb14d4b171f04fb833f58bf"
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.372376 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-zxm5n"
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.383480 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e4e05c1a-b07e-4307-ab90-314a6fcd6619","Type":"ContainerStarted","Data":"f099330ee932a5b05081e0792c960eb23b8e415e5bc83f0c095c8aa2767f939e"}
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.425338 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-zxm5n"]
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.434102 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-zxm5n"]
Nov 29 06:56:57 crc kubenswrapper[4947]: I1129 06:56:57.687585 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-59fd76fd88-6ghpn"]
Nov 29 06:56:58 crc kubenswrapper[4947]: I1129 06:56:58.259033 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-85c67749d7-4wkg2"]
Nov 29 06:56:58 crc kubenswrapper[4947]: I1129 06:56:58.319010 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b76cdf485-9slnj"]
Nov 29 06:56:58 crc kubenswrapper[4947]: W1129 06:56:58.323141 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccadffe2_e81e_44ab_a879_fefa01177386.slice/crio-306d3615cacca8bccc4f5338d529a31996db18259e4fd40e49555ee0cc667013 WatchSource:0}: Error finding container 306d3615cacca8bccc4f5338d529a31996db18259e4fd40e49555ee0cc667013: Status 404 returned error can't find the container with id 306d3615cacca8bccc4f5338d529a31996db18259e4fd40e49555ee0cc667013
Nov 29 06:56:58 crc kubenswrapper[4947]: I1129 06:56:58.367770 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74cff4c986-xvvw9"]
Nov 29 06:56:58 crc kubenswrapper[4947]: I1129 06:56:58.398283 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-85c67749d7-4wkg2" event={"ID":"ccadffe2-e81e-44ab-a879-fefa01177386","Type":"ContainerStarted","Data":"306d3615cacca8bccc4f5338d529a31996db18259e4fd40e49555ee0cc667013"}
Nov 29 06:56:58 crc kubenswrapper[4947]: I1129 06:56:58.402771 4947 generic.go:334] "Generic (PLEG): container finished" podID="aa846029-b0e0-4e05-b8c1-07aecc1d4d27" containerID="62573476cffcb9d8c698dfe890eefed8b750b85e45c89b79b189cb3529d55615" exitCode=0
Nov 29 06:56:58 crc kubenswrapper[4947]: I1129 06:56:58.402844 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64cd874c85-ts9sk" event={"ID":"aa846029-b0e0-4e05-b8c1-07aecc1d4d27","Type":"ContainerDied","Data":"62573476cffcb9d8c698dfe890eefed8b750b85e45c89b79b189cb3529d55615"}
Nov 29 06:56:58 crc kubenswrapper[4947]: I1129 06:56:58.413070 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-59fd76fd88-6ghpn" event={"ID":"ae7d6be8-33b3-4771-aebf-d7302883bd3d","Type":"ContainerStarted","Data":"4d71850bc26ed7c1a92359817a0d8948565d4b638d4c7fa7f26b1b1a6d170026"}
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.038800 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64cd874c85-ts9sk"
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.166329 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-ovsdbserver-nb\") pod \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\" (UID: \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\") "
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.166560 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzlxl\" (UniqueName: \"kubernetes.io/projected/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-kube-api-access-bzlxl\") pod \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\" (UID: \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\") "
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.166638 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-dns-svc\") pod \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\" (UID: \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\") "
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.166682 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-config\") pod \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\" (UID: \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\") "
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.166866 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-ovsdbserver-sb\") pod \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\" (UID: \"aa846029-b0e0-4e05-b8c1-07aecc1d4d27\") "
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.171760 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-kube-api-access-bzlxl" (OuterVolumeSpecName: "kube-api-access-bzlxl") pod "aa846029-b0e0-4e05-b8c1-07aecc1d4d27" (UID: "aa846029-b0e0-4e05-b8c1-07aecc1d4d27"). InnerVolumeSpecName "kube-api-access-bzlxl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.223835 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aa846029-b0e0-4e05-b8c1-07aecc1d4d27" (UID: "aa846029-b0e0-4e05-b8c1-07aecc1d4d27"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.244021 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aa846029-b0e0-4e05-b8c1-07aecc1d4d27" (UID: "aa846029-b0e0-4e05-b8c1-07aecc1d4d27"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.246116 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aa846029-b0e0-4e05-b8c1-07aecc1d4d27" (UID: "aa846029-b0e0-4e05-b8c1-07aecc1d4d27"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.259382 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b906095-1e6e-42c3-9952-ad436efa3fbf" path="/var/lib/kubelet/pods/5b906095-1e6e-42c3-9952-ad436efa3fbf/volumes"
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.270319 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.270357 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.270376 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzlxl\" (UniqueName: \"kubernetes.io/projected/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-kube-api-access-bzlxl\") on node \"crc\" DevicePath \"\""
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.270396 4947 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.299021 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-config" (OuterVolumeSpecName: "config") pod "aa846029-b0e0-4e05-b8c1-07aecc1d4d27" (UID: "aa846029-b0e0-4e05-b8c1-07aecc1d4d27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.386391 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa846029-b0e0-4e05-b8c1-07aecc1d4d27-config\") on node \"crc\" DevicePath \"\""
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.485426 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64cd874c85-ts9sk"
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.486393 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64cd874c85-ts9sk" event={"ID":"aa846029-b0e0-4e05-b8c1-07aecc1d4d27","Type":"ContainerDied","Data":"c4704e8c13fda51b1f899c0843889d532161001928e454be7e058018dd2644a1"}
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.487134 4947 scope.go:117] "RemoveContainer" containerID="62573476cffcb9d8c698dfe890eefed8b750b85e45c89b79b189cb3529d55615"
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.489970 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f3ec3135-6049-4853-b571-d23200456fc5","Type":"ContainerStarted","Data":"b95afbfd702ee6af53c7279a3db23229d79ce557513cd99b8ff3373116d61f92"}
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.496251 4947 generic.go:334] "Generic (PLEG): container finished" podID="f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e" containerID="573c7950e6c26246e6056de8f81fee44a0d18be16f7a904596f12b358039f01d" exitCode=0
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.496319 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" event={"ID":"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e","Type":"ContainerDied","Data":"573c7950e6c26246e6056de8f81fee44a0d18be16f7a904596f12b358039f01d"}
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.496345 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" event={"ID":"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e","Type":"ContainerStarted","Data":"53a367317bf1f8a7352b29812edc8660f28ea0c12f42a0f0dac96346eec93cbd"}
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.504422 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9825e55-7596-4c45-aa4c-0b74cc470e65","Type":"ContainerStarted","Data":"ac0b51b01da8f62c1b943c2fdd2504630ba0c4e130fbd7fe7750c0e0dae57019"}
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.504515 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.510484 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e4e05c1a-b07e-4307-ab90-314a6fcd6619","Type":"ContainerStarted","Data":"b9ebe694c0cdf8c5c9c1a14ade8cccfdc2c06a06875f324362b765e03f97b4fb"}
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.518470 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74cff4c986-xvvw9" event={"ID":"64edb581-33ba-46eb-a455-ce4f733f6944","Type":"ContainerStarted","Data":"ed921614ee825db304d84fd0399b2288485e209049695a02fee9507f5f0dfd5f"}
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.518612 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74cff4c986-xvvw9" event={"ID":"64edb581-33ba-46eb-a455-ce4f733f6944","Type":"ContainerStarted","Data":"00da878760f6635ad6f00a3eb8a86297344bee87a8ee18d327a7e734d108070b"}
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.568449 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.652430152 podStartE2EDuration="7.568276556s" podCreationTimestamp="2025-11-29 06:56:52 +0000 UTC" firstStartedPulling="2025-11-29 06:56:53.782608695 +0000 UTC m=+1364.826990766" lastFinishedPulling="2025-11-29 06:56:58.698455089 +0000 UTC m=+1369.742837170" observedRunningTime="2025-11-29 06:56:59.552647923 +0000 UTC m=+1370.597030014" watchObservedRunningTime="2025-11-29 06:56:59.568276556 +0000 UTC m=+1370.612658637"
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.664117 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64cd874c85-ts9sk"]
Nov 29 06:56:59 crc kubenswrapper[4947]: I1129 06:56:59.675529 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64cd874c85-ts9sk"]
Nov 29 06:57:00 crc kubenswrapper[4947]: I1129 06:57:00.569726 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e4e05c1a-b07e-4307-ab90-314a6fcd6619","Type":"ContainerStarted","Data":"6f0f3de9e87292dd6be01c8c9c3486cb64dd4b9d77692d3adfa71796b0d5fb07"}
Nov 29 06:57:00 crc kubenswrapper[4947]: I1129 06:57:00.571954 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Nov 29 06:57:00 crc kubenswrapper[4947]: I1129 06:57:00.583048 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74cff4c986-xvvw9" event={"ID":"64edb581-33ba-46eb-a455-ce4f733f6944","Type":"ContainerStarted","Data":"073329272db47c35438b64c66a2b0ca12747bb814aec8f5ca5d1ea0ab8637b33"}
Nov 29 06:57:00 crc kubenswrapper[4947]: I1129 06:57:00.583270 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74cff4c986-xvvw9"
Nov 29 06:57:00 crc kubenswrapper[4947]: I1129 06:57:00.583400 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74cff4c986-xvvw9"
Nov 29 06:57:00 crc kubenswrapper[4947]: I1129 06:57:00.588178 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f3ec3135-6049-4853-b571-d23200456fc5","Type":"ContainerStarted","Data":"57d4ad2c4e9379e05d802770be0a292157299c4dba878fc3378985bd6dd3f6bb"}
Nov 29 06:57:00 crc kubenswrapper[4947]: I1129 06:57:00.593008 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Nov 29 06:57:00 crc kubenswrapper[4947]: I1129 06:57:00.604357 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.604319262 podStartE2EDuration="5.604319262s" podCreationTimestamp="2025-11-29 06:56:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:57:00.59389141 +0000 UTC m=+1371.638273491" watchObservedRunningTime="2025-11-29 06:57:00.604319262 +0000 UTC m=+1371.648701343"
Nov 29 06:57:00 crc kubenswrapper[4947]: I1129 06:57:00.636461 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.722874962 podStartE2EDuration="5.636428679s" podCreationTimestamp="2025-11-29 06:56:55 +0000 UTC" firstStartedPulling="2025-11-29 06:56:56.370571336 +0000 UTC m=+1367.414953417" lastFinishedPulling="2025-11-29 06:56:57.284125053 +0000 UTC m=+1368.328507134" observedRunningTime="2025-11-29 06:57:00.635943417 +0000 UTC m=+1371.680325498" watchObservedRunningTime="2025-11-29 06:57:00.636428679 +0000 UTC m=+1371.680810750"
Nov 29 06:57:00 crc kubenswrapper[4947]: I1129 06:57:00.681410 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-74cff4c986-xvvw9" podStartSLOduration=4.681386869 podStartE2EDuration="4.681386869s" podCreationTimestamp="2025-11-29 06:56:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:57:00.661246963 +0000 UTC m=+1371.705629044" watchObservedRunningTime="2025-11-29 06:57:00.681386869 +0000 UTC m=+1371.725768940"
Nov 29 06:57:00 crc kubenswrapper[4947]: I1129 06:57:00.819250 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Nov 29 06:57:01 crc kubenswrapper[4947]: I1129 06:57:01.191134 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa846029-b0e0-4e05-b8c1-07aecc1d4d27" path="/var/lib/kubelet/pods/aa846029-b0e0-4e05-b8c1-07aecc1d4d27/volumes"
Nov 29 06:57:01 crc kubenswrapper[4947]: I1129 06:57:01.603061 4947 generic.go:334] "Generic (PLEG): container finished" podID="af3ad5b2-503c-4d1c-927c-0feab47e5212" containerID="e3996ec78a4407f0f39b3068a197d553922ee5f1b0a9155bead9dca46afe0267" exitCode=0
Nov 29 06:57:01 crc kubenswrapper[4947]: I1129 06:57:01.603151 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4sjrf" event={"ID":"af3ad5b2-503c-4d1c-927c-0feab47e5212","Type":"ContainerDied","Data":"e3996ec78a4407f0f39b3068a197d553922ee5f1b0a9155bead9dca46afe0267"}
Nov 29 06:57:01 crc kubenswrapper[4947]: I1129 06:57:01.611980 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" event={"ID":"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e","Type":"ContainerStarted","Data":"f46459103bf24a56ff3aa2f0cf7c4495c601ec70f7a9f9ab69a9d1c318c2ebd1"}
Nov 29 06:57:01 crc kubenswrapper[4947]: I1129 06:57:01.612060 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b76cdf485-9slnj"
Nov 29 06:57:01 crc kubenswrapper[4947]: I1129 06:57:01.663555 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" podStartSLOduration=5.663530851 podStartE2EDuration="5.663530851s" podCreationTimestamp="2025-11-29 06:56:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:57:01.645578119 +0000 UTC m=+1372.689960200" watchObservedRunningTime="2025-11-29 06:57:01.663530851 +0000 UTC m=+1372.707912932"
Nov 29 06:57:02 crc kubenswrapper[4947]: I1129 06:57:02.623081 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-59fd76fd88-6ghpn" event={"ID":"ae7d6be8-33b3-4771-aebf-d7302883bd3d","Type":"ContainerStarted","Data":"678efb524d07bbc02c7636789ce9c74eef29751ee7b7c19339bb500104180593"}
Nov 29 06:57:02 crc kubenswrapper[4947]: I1129 06:57:02.623564 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-59fd76fd88-6ghpn" event={"ID":"ae7d6be8-33b3-4771-aebf-d7302883bd3d","Type":"ContainerStarted","Data":"0e6be3c0e27d5af5be739312bda3608c878432b96d1e788ae24a47761aa613a5"}
Nov 29 06:57:02 crc kubenswrapper[4947]: I1129 06:57:02.625563 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-85c67749d7-4wkg2" event={"ID":"ccadffe2-e81e-44ab-a879-fefa01177386","Type":"ContainerStarted","Data":"56a165c68988d8cbc3ef5eaaf7a5d4ce77e691411007f1c6a4897d5661a43554"}
Nov 29 06:57:02 crc kubenswrapper[4947]: I1129 06:57:02.625615 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-85c67749d7-4wkg2" event={"ID":"ccadffe2-e81e-44ab-a879-fefa01177386","Type":"ContainerStarted","Data":"93ea4c0c540fec546422ff01c7314068ab14c952f573307c6bcc58640e05d050"}
Nov 29 06:57:02 crc kubenswrapper[4947]: I1129 06:57:02.626094 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e4e05c1a-b07e-4307-ab90-314a6fcd6619" containerName="cinder-api-log" containerID="cri-o://b9ebe694c0cdf8c5c9c1a14ade8cccfdc2c06a06875f324362b765e03f97b4fb" gracePeriod=30
Nov 29 06:57:02 crc kubenswrapper[4947]: I1129 06:57:02.626262 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e4e05c1a-b07e-4307-ab90-314a6fcd6619" containerName="cinder-api" containerID="cri-o://6f0f3de9e87292dd6be01c8c9c3486cb64dd4b9d77692d3adfa71796b0d5fb07" gracePeriod=30
Nov 29 06:57:02 crc kubenswrapper[4947]: I1129 06:57:02.664500 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-59fd76fd88-6ghpn" podStartSLOduration=3.071153538 podStartE2EDuration="6.664475914s" podCreationTimestamp="2025-11-29 06:56:56 +0000 UTC" firstStartedPulling="2025-11-29 06:56:57.756414486 +0000 UTC m=+1368.800796567" lastFinishedPulling="2025-11-29 06:57:01.349736852 +0000 UTC m=+1372.394118943" observedRunningTime="2025-11-29 06:57:02.659709124 +0000 UTC m=+1373.704091215" watchObservedRunningTime="2025-11-29 06:57:02.664475914 +0000 UTC m=+1373.708857995"
Nov 29 06:57:02 crc kubenswrapper[4947]: I1129 06:57:02.684961 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-85c67749d7-4wkg2" podStartSLOduration=3.6938552529999997 podStartE2EDuration="6.684925998s" podCreationTimestamp="2025-11-29 06:56:56 +0000 UTC" firstStartedPulling="2025-11-29 06:56:58.36556173 +0000 UTC m=+1369.409943811" lastFinishedPulling="2025-11-29 06:57:01.356632475 +0000 UTC m=+1372.401014556" observedRunningTime="2025-11-29 06:57:02.683769829 +0000 UTC m=+1373.728151920" watchObservedRunningTime="2025-11-29 06:57:02.684925998 +0000 UTC m=+1373.729308079"
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.125157 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4sjrf"
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.287939 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/af3ad5b2-503c-4d1c-927c-0feab47e5212-config\") pod \"af3ad5b2-503c-4d1c-927c-0feab47e5212\" (UID: \"af3ad5b2-503c-4d1c-927c-0feab47e5212\") "
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.288061 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kdg4\" (UniqueName: \"kubernetes.io/projected/af3ad5b2-503c-4d1c-927c-0feab47e5212-kube-api-access-4kdg4\") pod \"af3ad5b2-503c-4d1c-927c-0feab47e5212\" (UID: \"af3ad5b2-503c-4d1c-927c-0feab47e5212\") "
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.288106 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af3ad5b2-503c-4d1c-927c-0feab47e5212-combined-ca-bundle\") pod \"af3ad5b2-503c-4d1c-927c-0feab47e5212\" (UID: \"af3ad5b2-503c-4d1c-927c-0feab47e5212\") "
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.298066 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af3ad5b2-503c-4d1c-927c-0feab47e5212-kube-api-access-4kdg4" (OuterVolumeSpecName: "kube-api-access-4kdg4") pod "af3ad5b2-503c-4d1c-927c-0feab47e5212" (UID: "af3ad5b2-503c-4d1c-927c-0feab47e5212"). InnerVolumeSpecName "kube-api-access-4kdg4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.324171 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af3ad5b2-503c-4d1c-927c-0feab47e5212-config" (OuterVolumeSpecName: "config") pod "af3ad5b2-503c-4d1c-927c-0feab47e5212" (UID: "af3ad5b2-503c-4d1c-927c-0feab47e5212"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.337333 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af3ad5b2-503c-4d1c-927c-0feab47e5212-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af3ad5b2-503c-4d1c-927c-0feab47e5212" (UID: "af3ad5b2-503c-4d1c-927c-0feab47e5212"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.372956 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.401140 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/af3ad5b2-503c-4d1c-927c-0feab47e5212-config\") on node \"crc\" DevicePath \"\""
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.401184 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kdg4\" (UniqueName: \"kubernetes.io/projected/af3ad5b2-503c-4d1c-927c-0feab47e5212-kube-api-access-4kdg4\") on node \"crc\" DevicePath \"\""
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.401196 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af3ad5b2-503c-4d1c-927c-0feab47e5212-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.502332 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-config-data\") pod \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") "
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.502895 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-scripts\") pod \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") "
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.502963 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-combined-ca-bundle\") pod \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") "
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.502996 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-config-data-custom\") pod \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") "
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.503028 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xclr7\" (UniqueName: \"kubernetes.io/projected/e4e05c1a-b07e-4307-ab90-314a6fcd6619-kube-api-access-xclr7\") pod \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") "
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.503117 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4e05c1a-b07e-4307-ab90-314a6fcd6619-logs\") pod \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") "
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.503143 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e4e05c1a-b07e-4307-ab90-314a6fcd6619-etc-machine-id\") pod \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\" (UID: \"e4e05c1a-b07e-4307-ab90-314a6fcd6619\") "
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.503789 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4e05c1a-b07e-4307-ab90-314a6fcd6619-logs" (OuterVolumeSpecName: "logs") pod "e4e05c1a-b07e-4307-ab90-314a6fcd6619" (UID: "e4e05c1a-b07e-4307-ab90-314a6fcd6619"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.503829 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4e05c1a-b07e-4307-ab90-314a6fcd6619-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e4e05c1a-b07e-4307-ab90-314a6fcd6619" (UID: "e4e05c1a-b07e-4307-ab90-314a6fcd6619"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.508264 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4e05c1a-b07e-4307-ab90-314a6fcd6619-kube-api-access-xclr7" (OuterVolumeSpecName: "kube-api-access-xclr7") pod "e4e05c1a-b07e-4307-ab90-314a6fcd6619" (UID: "e4e05c1a-b07e-4307-ab90-314a6fcd6619"). InnerVolumeSpecName "kube-api-access-xclr7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.508365 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-scripts" (OuterVolumeSpecName: "scripts") pod "e4e05c1a-b07e-4307-ab90-314a6fcd6619" (UID: "e4e05c1a-b07e-4307-ab90-314a6fcd6619"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.508833 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e4e05c1a-b07e-4307-ab90-314a6fcd6619" (UID: "e4e05c1a-b07e-4307-ab90-314a6fcd6619"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.531656 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4e05c1a-b07e-4307-ab90-314a6fcd6619" (UID: "e4e05c1a-b07e-4307-ab90-314a6fcd6619"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.554512 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-config-data" (OuterVolumeSpecName: "config-data") pod "e4e05c1a-b07e-4307-ab90-314a6fcd6619" (UID: "e4e05c1a-b07e-4307-ab90-314a6fcd6619"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.605410 4947 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4e05c1a-b07e-4307-ab90-314a6fcd6619-logs\") on node \"crc\" DevicePath \"\""
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.606450 4947 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e4e05c1a-b07e-4307-ab90-314a6fcd6619-etc-machine-id\") on node \"crc\" DevicePath \"\""
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.606684 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-config-data\") on node \"crc\" DevicePath \"\""
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.606701 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-scripts\") on node \"crc\" DevicePath \"\""
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.606711 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.606724 4947 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4e05c1a-b07e-4307-ab90-314a6fcd6619-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.606760 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xclr7\" (UniqueName: \"kubernetes.io/projected/e4e05c1a-b07e-4307-ab90-314a6fcd6619-kube-api-access-xclr7\") on node \"crc\" DevicePath \"\""
Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.640935 4947 generic.go:334] "Generic
(PLEG): container finished" podID="e4e05c1a-b07e-4307-ab90-314a6fcd6619" containerID="6f0f3de9e87292dd6be01c8c9c3486cb64dd4b9d77692d3adfa71796b0d5fb07" exitCode=0 Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.640979 4947 generic.go:334] "Generic (PLEG): container finished" podID="e4e05c1a-b07e-4307-ab90-314a6fcd6619" containerID="b9ebe694c0cdf8c5c9c1a14ade8cccfdc2c06a06875f324362b765e03f97b4fb" exitCode=143 Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.641039 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e4e05c1a-b07e-4307-ab90-314a6fcd6619","Type":"ContainerDied","Data":"6f0f3de9e87292dd6be01c8c9c3486cb64dd4b9d77692d3adfa71796b0d5fb07"} Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.641106 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e4e05c1a-b07e-4307-ab90-314a6fcd6619","Type":"ContainerDied","Data":"b9ebe694c0cdf8c5c9c1a14ade8cccfdc2c06a06875f324362b765e03f97b4fb"} Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.641121 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e4e05c1a-b07e-4307-ab90-314a6fcd6619","Type":"ContainerDied","Data":"f099330ee932a5b05081e0792c960eb23b8e415e5bc83f0c095c8aa2767f939e"} Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.641142 4947 scope.go:117] "RemoveContainer" containerID="6f0f3de9e87292dd6be01c8c9c3486cb64dd4b9d77692d3adfa71796b0d5fb07" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.641363 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.650348 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4sjrf" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.652571 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4sjrf" event={"ID":"af3ad5b2-503c-4d1c-927c-0feab47e5212","Type":"ContainerDied","Data":"222b09d42ba08bc614d3fa0a72c941b42f2f854132fd50c19f420ef8347c9f98"} Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.652618 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="222b09d42ba08bc614d3fa0a72c941b42f2f854132fd50c19f420ef8347c9f98" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.674329 4947 scope.go:117] "RemoveContainer" containerID="b9ebe694c0cdf8c5c9c1a14ade8cccfdc2c06a06875f324362b765e03f97b4fb" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.738831 4947 scope.go:117] "RemoveContainer" containerID="6f0f3de9e87292dd6be01c8c9c3486cb64dd4b9d77692d3adfa71796b0d5fb07" Nov 29 06:57:03 crc kubenswrapper[4947]: E1129 06:57:03.740898 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f0f3de9e87292dd6be01c8c9c3486cb64dd4b9d77692d3adfa71796b0d5fb07\": container with ID starting with 6f0f3de9e87292dd6be01c8c9c3486cb64dd4b9d77692d3adfa71796b0d5fb07 not found: ID does not exist" containerID="6f0f3de9e87292dd6be01c8c9c3486cb64dd4b9d77692d3adfa71796b0d5fb07" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.740991 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f0f3de9e87292dd6be01c8c9c3486cb64dd4b9d77692d3adfa71796b0d5fb07"} err="failed to get container status \"6f0f3de9e87292dd6be01c8c9c3486cb64dd4b9d77692d3adfa71796b0d5fb07\": rpc error: code = NotFound desc = could not find container \"6f0f3de9e87292dd6be01c8c9c3486cb64dd4b9d77692d3adfa71796b0d5fb07\": container with ID starting with 6f0f3de9e87292dd6be01c8c9c3486cb64dd4b9d77692d3adfa71796b0d5fb07 not found: ID does not 
exist" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.741030 4947 scope.go:117] "RemoveContainer" containerID="b9ebe694c0cdf8c5c9c1a14ade8cccfdc2c06a06875f324362b765e03f97b4fb" Nov 29 06:57:03 crc kubenswrapper[4947]: E1129 06:57:03.741924 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9ebe694c0cdf8c5c9c1a14ade8cccfdc2c06a06875f324362b765e03f97b4fb\": container with ID starting with b9ebe694c0cdf8c5c9c1a14ade8cccfdc2c06a06875f324362b765e03f97b4fb not found: ID does not exist" containerID="b9ebe694c0cdf8c5c9c1a14ade8cccfdc2c06a06875f324362b765e03f97b4fb" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.742021 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9ebe694c0cdf8c5c9c1a14ade8cccfdc2c06a06875f324362b765e03f97b4fb"} err="failed to get container status \"b9ebe694c0cdf8c5c9c1a14ade8cccfdc2c06a06875f324362b765e03f97b4fb\": rpc error: code = NotFound desc = could not find container \"b9ebe694c0cdf8c5c9c1a14ade8cccfdc2c06a06875f324362b765e03f97b4fb\": container with ID starting with b9ebe694c0cdf8c5c9c1a14ade8cccfdc2c06a06875f324362b765e03f97b4fb not found: ID does not exist" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.742065 4947 scope.go:117] "RemoveContainer" containerID="6f0f3de9e87292dd6be01c8c9c3486cb64dd4b9d77692d3adfa71796b0d5fb07" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.743925 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f0f3de9e87292dd6be01c8c9c3486cb64dd4b9d77692d3adfa71796b0d5fb07"} err="failed to get container status \"6f0f3de9e87292dd6be01c8c9c3486cb64dd4b9d77692d3adfa71796b0d5fb07\": rpc error: code = NotFound desc = could not find container \"6f0f3de9e87292dd6be01c8c9c3486cb64dd4b9d77692d3adfa71796b0d5fb07\": container with ID starting with 6f0f3de9e87292dd6be01c8c9c3486cb64dd4b9d77692d3adfa71796b0d5fb07 not found: ID 
does not exist" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.744016 4947 scope.go:117] "RemoveContainer" containerID="b9ebe694c0cdf8c5c9c1a14ade8cccfdc2c06a06875f324362b765e03f97b4fb" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.745122 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9ebe694c0cdf8c5c9c1a14ade8cccfdc2c06a06875f324362b765e03f97b4fb"} err="failed to get container status \"b9ebe694c0cdf8c5c9c1a14ade8cccfdc2c06a06875f324362b765e03f97b4fb\": rpc error: code = NotFound desc = could not find container \"b9ebe694c0cdf8c5c9c1a14ade8cccfdc2c06a06875f324362b765e03f97b4fb\": container with ID starting with b9ebe694c0cdf8c5c9c1a14ade8cccfdc2c06a06875f324362b765e03f97b4fb not found: ID does not exist" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.756422 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.799906 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.816731 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 29 06:57:03 crc kubenswrapper[4947]: E1129 06:57:03.817392 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa846029-b0e0-4e05-b8c1-07aecc1d4d27" containerName="init" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.817423 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa846029-b0e0-4e05-b8c1-07aecc1d4d27" containerName="init" Nov 29 06:57:03 crc kubenswrapper[4947]: E1129 06:57:03.817438 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4e05c1a-b07e-4307-ab90-314a6fcd6619" containerName="cinder-api-log" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.817448 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e05c1a-b07e-4307-ab90-314a6fcd6619" containerName="cinder-api-log" 
Nov 29 06:57:03 crc kubenswrapper[4947]: E1129 06:57:03.817472 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af3ad5b2-503c-4d1c-927c-0feab47e5212" containerName="neutron-db-sync" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.817480 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="af3ad5b2-503c-4d1c-927c-0feab47e5212" containerName="neutron-db-sync" Nov 29 06:57:03 crc kubenswrapper[4947]: E1129 06:57:03.817500 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b906095-1e6e-42c3-9952-ad436efa3fbf" containerName="init" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.817509 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b906095-1e6e-42c3-9952-ad436efa3fbf" containerName="init" Nov 29 06:57:03 crc kubenswrapper[4947]: E1129 06:57:03.817533 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4e05c1a-b07e-4307-ab90-314a6fcd6619" containerName="cinder-api" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.817540 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e05c1a-b07e-4307-ab90-314a6fcd6619" containerName="cinder-api" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.817770 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4e05c1a-b07e-4307-ab90-314a6fcd6619" containerName="cinder-api" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.817797 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="af3ad5b2-503c-4d1c-927c-0feab47e5212" containerName="neutron-db-sync" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.817814 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa846029-b0e0-4e05-b8c1-07aecc1d4d27" containerName="init" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.817824 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b906095-1e6e-42c3-9952-ad436efa3fbf" containerName="init" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 
06:57:03.817836 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4e05c1a-b07e-4307-ab90-314a6fcd6619" containerName="cinder-api-log" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.819186 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.821829 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53f96945-8da7-4cac-8579-990373298a91-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.821955 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53f96945-8da7-4cac-8579-990373298a91-public-tls-certs\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.821977 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6bkr\" (UniqueName: \"kubernetes.io/projected/53f96945-8da7-4cac-8579-990373298a91-kube-api-access-x6bkr\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.822048 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f96945-8da7-4cac-8579-990373298a91-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.822082 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/53f96945-8da7-4cac-8579-990373298a91-etc-machine-id\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.822135 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f96945-8da7-4cac-8579-990373298a91-config-data\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.822178 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53f96945-8da7-4cac-8579-990373298a91-config-data-custom\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.822230 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53f96945-8da7-4cac-8579-990373298a91-logs\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.822256 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f96945-8da7-4cac-8579-990373298a91-scripts\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.830794 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.831101 4947 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-cinder-internal-svc" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.831322 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.857553 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.924003 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f96945-8da7-4cac-8579-990373298a91-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.924075 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/53f96945-8da7-4cac-8579-990373298a91-etc-machine-id\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.924132 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f96945-8da7-4cac-8579-990373298a91-config-data\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.924179 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53f96945-8da7-4cac-8579-990373298a91-config-data-custom\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.924199 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/53f96945-8da7-4cac-8579-990373298a91-logs\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.924280 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f96945-8da7-4cac-8579-990373298a91-scripts\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.924307 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53f96945-8da7-4cac-8579-990373298a91-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.924348 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53f96945-8da7-4cac-8579-990373298a91-public-tls-certs\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.924371 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6bkr\" (UniqueName: \"kubernetes.io/projected/53f96945-8da7-4cac-8579-990373298a91-kube-api-access-x6bkr\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.929171 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f96945-8da7-4cac-8579-990373298a91-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: 
I1129 06:57:03.929322 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/53f96945-8da7-4cac-8579-990373298a91-etc-machine-id\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.934029 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f96945-8da7-4cac-8579-990373298a91-config-data\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.938229 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53f96945-8da7-4cac-8579-990373298a91-config-data-custom\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.938534 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53f96945-8da7-4cac-8579-990373298a91-logs\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.942172 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f96945-8da7-4cac-8579-990373298a91-scripts\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.944891 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53f96945-8da7-4cac-8579-990373298a91-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " 
pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.950210 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53f96945-8da7-4cac-8579-990373298a91-public-tls-certs\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:03 crc kubenswrapper[4947]: I1129 06:57:03.995975 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6bkr\" (UniqueName: \"kubernetes.io/projected/53f96945-8da7-4cac-8579-990373298a91-kube-api-access-x6bkr\") pod \"cinder-api-0\" (UID: \"53f96945-8da7-4cac-8579-990373298a91\") " pod="openstack/cinder-api-0" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.012795 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b76cdf485-9slnj"] Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.013105 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" podUID="f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e" containerName="dnsmasq-dns" containerID="cri-o://f46459103bf24a56ff3aa2f0cf7c4495c601ec70f7a9f9ab69a9d1c318c2ebd1" gracePeriod=10 Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.081003 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-cksns"] Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.083161 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.110752 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7d7467cbc8-jttcd"] Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.112650 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7d7467cbc8-jttcd" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.117487 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.117558 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.117692 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pf86q" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.117788 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.132043 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-cksns\" (UID: \"a07539ac-ab47-4fb2-b397-1dba22e18c65\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.132116 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-httpd-config\") pod \"neutron-7d7467cbc8-jttcd\" (UID: \"37e2a512-1c34-4400-8986-244e64410004\") " pod="openstack/neutron-7d7467cbc8-jttcd" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.132151 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-combined-ca-bundle\") pod \"neutron-7d7467cbc8-jttcd\" (UID: \"37e2a512-1c34-4400-8986-244e64410004\") " pod="openstack/neutron-7d7467cbc8-jttcd" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.132186 
4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-config\") pod \"dnsmasq-dns-6d97fcdd8f-cksns\" (UID: \"a07539ac-ab47-4fb2-b397-1dba22e18c65\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.132206 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-config\") pod \"neutron-7d7467cbc8-jttcd\" (UID: \"37e2a512-1c34-4400-8986-244e64410004\") " pod="openstack/neutron-7d7467cbc8-jttcd" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.132330 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-cksns\" (UID: \"a07539ac-ab47-4fb2-b397-1dba22e18c65\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.132359 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-cksns\" (UID: \"a07539ac-ab47-4fb2-b397-1dba22e18c65\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.132399 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2287w\" (UniqueName: \"kubernetes.io/projected/a07539ac-ab47-4fb2-b397-1dba22e18c65-kube-api-access-2287w\") pod \"dnsmasq-dns-6d97fcdd8f-cksns\" (UID: \"a07539ac-ab47-4fb2-b397-1dba22e18c65\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.132412 
4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-ovndb-tls-certs\") pod \"neutron-7d7467cbc8-jttcd\" (UID: \"37e2a512-1c34-4400-8986-244e64410004\") " pod="openstack/neutron-7d7467cbc8-jttcd" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.132445 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb5fb\" (UniqueName: \"kubernetes.io/projected/37e2a512-1c34-4400-8986-244e64410004-kube-api-access-jb5fb\") pod \"neutron-7d7467cbc8-jttcd\" (UID: \"37e2a512-1c34-4400-8986-244e64410004\") " pod="openstack/neutron-7d7467cbc8-jttcd" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.135836 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-cksns"] Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.146963 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.180148 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d7467cbc8-jttcd"] Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.244512 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-combined-ca-bundle\") pod \"neutron-7d7467cbc8-jttcd\" (UID: \"37e2a512-1c34-4400-8986-244e64410004\") " pod="openstack/neutron-7d7467cbc8-jttcd" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.244593 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-config\") pod \"dnsmasq-dns-6d97fcdd8f-cksns\" (UID: \"a07539ac-ab47-4fb2-b397-1dba22e18c65\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.244613 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-config\") pod \"neutron-7d7467cbc8-jttcd\" (UID: \"37e2a512-1c34-4400-8986-244e64410004\") " pod="openstack/neutron-7d7467cbc8-jttcd" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.244635 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-cksns\" (UID: \"a07539ac-ab47-4fb2-b397-1dba22e18c65\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.244659 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-dns-svc\") pod 
\"dnsmasq-dns-6d97fcdd8f-cksns\" (UID: \"a07539ac-ab47-4fb2-b397-1dba22e18c65\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.244700 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2287w\" (UniqueName: \"kubernetes.io/projected/a07539ac-ab47-4fb2-b397-1dba22e18c65-kube-api-access-2287w\") pod \"dnsmasq-dns-6d97fcdd8f-cksns\" (UID: \"a07539ac-ab47-4fb2-b397-1dba22e18c65\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.244718 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-ovndb-tls-certs\") pod \"neutron-7d7467cbc8-jttcd\" (UID: \"37e2a512-1c34-4400-8986-244e64410004\") " pod="openstack/neutron-7d7467cbc8-jttcd" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.244759 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb5fb\" (UniqueName: \"kubernetes.io/projected/37e2a512-1c34-4400-8986-244e64410004-kube-api-access-jb5fb\") pod \"neutron-7d7467cbc8-jttcd\" (UID: \"37e2a512-1c34-4400-8986-244e64410004\") " pod="openstack/neutron-7d7467cbc8-jttcd" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.244841 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-cksns\" (UID: \"a07539ac-ab47-4fb2-b397-1dba22e18c65\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.244866 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-httpd-config\") pod \"neutron-7d7467cbc8-jttcd\" (UID: 
\"37e2a512-1c34-4400-8986-244e64410004\") " pod="openstack/neutron-7d7467cbc8-jttcd" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.250915 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-config\") pod \"dnsmasq-dns-6d97fcdd8f-cksns\" (UID: \"a07539ac-ab47-4fb2-b397-1dba22e18c65\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.252237 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-cksns\" (UID: \"a07539ac-ab47-4fb2-b397-1dba22e18c65\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.259032 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-ovndb-tls-certs\") pod \"neutron-7d7467cbc8-jttcd\" (UID: \"37e2a512-1c34-4400-8986-244e64410004\") " pod="openstack/neutron-7d7467cbc8-jttcd" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.259709 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-combined-ca-bundle\") pod \"neutron-7d7467cbc8-jttcd\" (UID: \"37e2a512-1c34-4400-8986-244e64410004\") " pod="openstack/neutron-7d7467cbc8-jttcd" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.260657 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-cksns\" (UID: \"a07539ac-ab47-4fb2-b397-1dba22e18c65\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 
06:57:04.261443 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-config\") pod \"neutron-7d7467cbc8-jttcd\" (UID: \"37e2a512-1c34-4400-8986-244e64410004\") " pod="openstack/neutron-7d7467cbc8-jttcd" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.261939 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-httpd-config\") pod \"neutron-7d7467cbc8-jttcd\" (UID: \"37e2a512-1c34-4400-8986-244e64410004\") " pod="openstack/neutron-7d7467cbc8-jttcd" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.264624 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-cksns\" (UID: \"a07539ac-ab47-4fb2-b397-1dba22e18c65\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.294288 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb5fb\" (UniqueName: \"kubernetes.io/projected/37e2a512-1c34-4400-8986-244e64410004-kube-api-access-jb5fb\") pod \"neutron-7d7467cbc8-jttcd\" (UID: \"37e2a512-1c34-4400-8986-244e64410004\") " pod="openstack/neutron-7d7467cbc8-jttcd" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.299971 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2287w\" (UniqueName: \"kubernetes.io/projected/a07539ac-ab47-4fb2-b397-1dba22e18c65-kube-api-access-2287w\") pod \"dnsmasq-dns-6d97fcdd8f-cksns\" (UID: \"a07539ac-ab47-4fb2-b397-1dba22e18c65\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.518785 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.548509 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7d7467cbc8-jttcd" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.690508 4947 generic.go:334] "Generic (PLEG): container finished" podID="f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e" containerID="f46459103bf24a56ff3aa2f0cf7c4495c601ec70f7a9f9ab69a9d1c318c2ebd1" exitCode=0 Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.690566 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" event={"ID":"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e","Type":"ContainerDied","Data":"f46459103bf24a56ff3aa2f0cf7c4495c601ec70f7a9f9ab69a9d1c318c2ebd1"} Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.713357 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.757755 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-ovsdbserver-sb\") pod \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\" (UID: \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\") " Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.757843 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crmzn\" (UniqueName: \"kubernetes.io/projected/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-kube-api-access-crmzn\") pod \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\" (UID: \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\") " Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.757884 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-ovsdbserver-nb\") pod 
\"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\" (UID: \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\") " Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.757927 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-config\") pod \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\" (UID: \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\") " Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.758011 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-dns-svc\") pod \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\" (UID: \"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e\") " Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.769081 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-kube-api-access-crmzn" (OuterVolumeSpecName: "kube-api-access-crmzn") pod "f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e" (UID: "f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e"). InnerVolumeSpecName "kube-api-access-crmzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.834894 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e" (UID: "f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.837201 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e" (UID: "f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.860643 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.860689 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crmzn\" (UniqueName: \"kubernetes.io/projected/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-kube-api-access-crmzn\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.860702 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.863101 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e" (UID: "f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.872901 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-config" (OuterVolumeSpecName: "config") pod "f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e" (UID: "f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.962060 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.962095 4947 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:04 crc kubenswrapper[4947]: I1129 06:57:04.962723 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 29 06:57:05 crc kubenswrapper[4947]: I1129 06:57:05.151808 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-cksns"] Nov 29 06:57:05 crc kubenswrapper[4947]: I1129 06:57:05.196546 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4e05c1a-b07e-4307-ab90-314a6fcd6619" path="/var/lib/kubelet/pods/e4e05c1a-b07e-4307-ab90-314a6fcd6619/volumes" Nov 29 06:57:05 crc kubenswrapper[4947]: I1129 06:57:05.372633 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d7467cbc8-jttcd"] Nov 29 06:57:05 crc kubenswrapper[4947]: I1129 06:57:05.703605 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d7467cbc8-jttcd" 
event={"ID":"37e2a512-1c34-4400-8986-244e64410004","Type":"ContainerStarted","Data":"3eca658dafbdb3640f4d19a53677695531eab41e8263797a8550999e17d0d64a"} Nov 29 06:57:05 crc kubenswrapper[4947]: I1129 06:57:05.706062 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"53f96945-8da7-4cac-8579-990373298a91","Type":"ContainerStarted","Data":"3005f0f7eb3b582cb70562dfb3f87e7aeea763116dab63e2c7b63b8b9b04763b"} Nov 29 06:57:05 crc kubenswrapper[4947]: I1129 06:57:05.709497 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" event={"ID":"f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e","Type":"ContainerDied","Data":"53a367317bf1f8a7352b29812edc8660f28ea0c12f42a0f0dac96346eec93cbd"} Nov 29 06:57:05 crc kubenswrapper[4947]: I1129 06:57:05.709569 4947 scope.go:117] "RemoveContainer" containerID="f46459103bf24a56ff3aa2f0cf7c4495c601ec70f7a9f9ab69a9d1c318c2ebd1" Nov 29 06:57:05 crc kubenswrapper[4947]: I1129 06:57:05.709749 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b76cdf485-9slnj" Nov 29 06:57:05 crc kubenswrapper[4947]: I1129 06:57:05.717041 4947 generic.go:334] "Generic (PLEG): container finished" podID="a07539ac-ab47-4fb2-b397-1dba22e18c65" containerID="d109548ef1a1996edd368d9cefa4f837b55738749cfc6a7f8bceeb3c93414e8f" exitCode=0 Nov 29 06:57:05 crc kubenswrapper[4947]: I1129 06:57:05.717125 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" event={"ID":"a07539ac-ab47-4fb2-b397-1dba22e18c65","Type":"ContainerDied","Data":"d109548ef1a1996edd368d9cefa4f837b55738749cfc6a7f8bceeb3c93414e8f"} Nov 29 06:57:05 crc kubenswrapper[4947]: I1129 06:57:05.717177 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" event={"ID":"a07539ac-ab47-4fb2-b397-1dba22e18c65","Type":"ContainerStarted","Data":"04c2aa1f187dc05e8704b8b7fe6a2f872b3aef7e8642eb0941a300153d08eb21"} Nov 29 06:57:05 crc kubenswrapper[4947]: I1129 06:57:05.762316 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b76cdf485-9slnj"] Nov 29 06:57:05 crc kubenswrapper[4947]: I1129 06:57:05.770277 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b76cdf485-9slnj"] Nov 29 06:57:05 crc kubenswrapper[4947]: I1129 06:57:05.784693 4947 scope.go:117] "RemoveContainer" containerID="573c7950e6c26246e6056de8f81fee44a0d18be16f7a904596f12b358039f01d" Nov 29 06:57:06 crc kubenswrapper[4947]: I1129 06:57:06.195523 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 29 06:57:06 crc kubenswrapper[4947]: I1129 06:57:06.286181 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 06:57:06 crc kubenswrapper[4947]: I1129 06:57:06.746324 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" 
event={"ID":"a07539ac-ab47-4fb2-b397-1dba22e18c65","Type":"ContainerStarted","Data":"3cdcdfa65126ac936671c2cddaab29c8f8cbad6987cd5665779e497743701b64"} Nov 29 06:57:06 crc kubenswrapper[4947]: I1129 06:57:06.747120 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" Nov 29 06:57:06 crc kubenswrapper[4947]: I1129 06:57:06.752587 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d7467cbc8-jttcd" event={"ID":"37e2a512-1c34-4400-8986-244e64410004","Type":"ContainerStarted","Data":"6de20a3f02eacb904a39e3413e77f93d0c3c36c1cbbfa751bcb7cae8c3a74639"} Nov 29 06:57:06 crc kubenswrapper[4947]: I1129 06:57:06.752654 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d7467cbc8-jttcd" event={"ID":"37e2a512-1c34-4400-8986-244e64410004","Type":"ContainerStarted","Data":"b66610f79b9b74b6dc6abb459e34b205c173b7d7b7427422616711f042ef3e3e"} Nov 29 06:57:06 crc kubenswrapper[4947]: I1129 06:57:06.752760 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7d7467cbc8-jttcd" Nov 29 06:57:06 crc kubenswrapper[4947]: I1129 06:57:06.761355 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f3ec3135-6049-4853-b571-d23200456fc5" containerName="cinder-scheduler" containerID="cri-o://b95afbfd702ee6af53c7279a3db23229d79ce557513cd99b8ff3373116d61f92" gracePeriod=30 Nov 29 06:57:06 crc kubenswrapper[4947]: I1129 06:57:06.762681 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f3ec3135-6049-4853-b571-d23200456fc5" containerName="probe" containerID="cri-o://57d4ad2c4e9379e05d802770be0a292157299c4dba878fc3378985bd6dd3f6bb" gracePeriod=30 Nov 29 06:57:06 crc kubenswrapper[4947]: I1129 06:57:06.762730 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"53f96945-8da7-4cac-8579-990373298a91","Type":"ContainerStarted","Data":"0f2ee5adf78b5fa7543a2a940bde21df7475cf2fce2aacffef352a4a5aa2e7c0"} Nov 29 06:57:06 crc kubenswrapper[4947]: I1129 06:57:06.762797 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"53f96945-8da7-4cac-8579-990373298a91","Type":"ContainerStarted","Data":"49c8ea492298c9973bdd61543c25031b308aa28dac89a5dfde276af2ff8b80a5"} Nov 29 06:57:06 crc kubenswrapper[4947]: I1129 06:57:06.762947 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 29 06:57:06 crc kubenswrapper[4947]: I1129 06:57:06.772819 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" podStartSLOduration=2.772793366 podStartE2EDuration="2.772793366s" podCreationTimestamp="2025-11-29 06:57:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:57:06.768682623 +0000 UTC m=+1377.813064734" watchObservedRunningTime="2025-11-29 06:57:06.772793366 +0000 UTC m=+1377.817175447" Nov 29 06:57:06 crc kubenswrapper[4947]: I1129 06:57:06.795448 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7d7467cbc8-jttcd" podStartSLOduration=2.795419125 podStartE2EDuration="2.795419125s" podCreationTimestamp="2025-11-29 06:57:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:57:06.791322102 +0000 UTC m=+1377.835704193" watchObservedRunningTime="2025-11-29 06:57:06.795419125 +0000 UTC m=+1377.839801236" Nov 29 06:57:06 crc kubenswrapper[4947]: I1129 06:57:06.823750 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.823721217 podStartE2EDuration="3.823721217s" 
podCreationTimestamp="2025-11-29 06:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:57:06.821725607 +0000 UTC m=+1377.866107688" watchObservedRunningTime="2025-11-29 06:57:06.823721217 +0000 UTC m=+1377.868103308" Nov 29 06:57:07 crc kubenswrapper[4947]: I1129 06:57:07.189745 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e" path="/var/lib/kubelet/pods/f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e/volumes" Nov 29 06:57:08 crc kubenswrapper[4947]: I1129 06:57:08.406936 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-586fc4c58d-gvrpk" Nov 29 06:57:08 crc kubenswrapper[4947]: I1129 06:57:08.795516 4947 generic.go:334] "Generic (PLEG): container finished" podID="f3ec3135-6049-4853-b571-d23200456fc5" containerID="57d4ad2c4e9379e05d802770be0a292157299c4dba878fc3378985bd6dd3f6bb" exitCode=0 Nov 29 06:57:08 crc kubenswrapper[4947]: I1129 06:57:08.795912 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f3ec3135-6049-4853-b571-d23200456fc5","Type":"ContainerDied","Data":"57d4ad2c4e9379e05d802770be0a292157299c4dba878fc3378985bd6dd3f6bb"} Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.215970 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dbcc54df-b7blt"] Nov 29 06:57:09 crc kubenswrapper[4947]: E1129 06:57:09.217040 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e" containerName="dnsmasq-dns" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.217077 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e" containerName="dnsmasq-dns" Nov 29 06:57:09 crc kubenswrapper[4947]: E1129 06:57:09.217134 4947 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e" containerName="init" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.217144 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e" containerName="init" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.217798 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2f71b24-4d6a-439f-9dce-5f1b32bd6b7e" containerName="dnsmasq-dns" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.226772 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.240612 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.242803 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.282353 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dbcc54df-b7blt"] Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.324229 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc94a354-bf43-4d41-bd15-33a8c766752f-public-tls-certs\") pod \"neutron-dbcc54df-b7blt\" (UID: \"fc94a354-bf43-4d41-bd15-33a8c766752f\") " pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.324298 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-264fz\" (UniqueName: \"kubernetes.io/projected/fc94a354-bf43-4d41-bd15-33a8c766752f-kube-api-access-264fz\") pod \"neutron-dbcc54df-b7blt\" (UID: \"fc94a354-bf43-4d41-bd15-33a8c766752f\") " pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.324335 4947 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc94a354-bf43-4d41-bd15-33a8c766752f-combined-ca-bundle\") pod \"neutron-dbcc54df-b7blt\" (UID: \"fc94a354-bf43-4d41-bd15-33a8c766752f\") " pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.324382 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc94a354-bf43-4d41-bd15-33a8c766752f-internal-tls-certs\") pod \"neutron-dbcc54df-b7blt\" (UID: \"fc94a354-bf43-4d41-bd15-33a8c766752f\") " pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.324436 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc94a354-bf43-4d41-bd15-33a8c766752f-ovndb-tls-certs\") pod \"neutron-dbcc54df-b7blt\" (UID: \"fc94a354-bf43-4d41-bd15-33a8c766752f\") " pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.324464 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc94a354-bf43-4d41-bd15-33a8c766752f-config\") pod \"neutron-dbcc54df-b7blt\" (UID: \"fc94a354-bf43-4d41-bd15-33a8c766752f\") " pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.324488 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fc94a354-bf43-4d41-bd15-33a8c766752f-httpd-config\") pod \"neutron-dbcc54df-b7blt\" (UID: \"fc94a354-bf43-4d41-bd15-33a8c766752f\") " pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.426453 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc94a354-bf43-4d41-bd15-33a8c766752f-internal-tls-certs\") pod \"neutron-dbcc54df-b7blt\" (UID: \"fc94a354-bf43-4d41-bd15-33a8c766752f\") " pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.426551 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc94a354-bf43-4d41-bd15-33a8c766752f-ovndb-tls-certs\") pod \"neutron-dbcc54df-b7blt\" (UID: \"fc94a354-bf43-4d41-bd15-33a8c766752f\") " pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.426583 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc94a354-bf43-4d41-bd15-33a8c766752f-config\") pod \"neutron-dbcc54df-b7blt\" (UID: \"fc94a354-bf43-4d41-bd15-33a8c766752f\") " pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.426608 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fc94a354-bf43-4d41-bd15-33a8c766752f-httpd-config\") pod \"neutron-dbcc54df-b7blt\" (UID: \"fc94a354-bf43-4d41-bd15-33a8c766752f\") " pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.426666 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc94a354-bf43-4d41-bd15-33a8c766752f-public-tls-certs\") pod \"neutron-dbcc54df-b7blt\" (UID: \"fc94a354-bf43-4d41-bd15-33a8c766752f\") " pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.426684 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-264fz\" (UniqueName: 
\"kubernetes.io/projected/fc94a354-bf43-4d41-bd15-33a8c766752f-kube-api-access-264fz\") pod \"neutron-dbcc54df-b7blt\" (UID: \"fc94a354-bf43-4d41-bd15-33a8c766752f\") " pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.426713 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc94a354-bf43-4d41-bd15-33a8c766752f-combined-ca-bundle\") pod \"neutron-dbcc54df-b7blt\" (UID: \"fc94a354-bf43-4d41-bd15-33a8c766752f\") " pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.441235 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fc94a354-bf43-4d41-bd15-33a8c766752f-httpd-config\") pod \"neutron-dbcc54df-b7blt\" (UID: \"fc94a354-bf43-4d41-bd15-33a8c766752f\") " pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.443055 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc94a354-bf43-4d41-bd15-33a8c766752f-combined-ca-bundle\") pod \"neutron-dbcc54df-b7blt\" (UID: \"fc94a354-bf43-4d41-bd15-33a8c766752f\") " pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.443562 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc94a354-bf43-4d41-bd15-33a8c766752f-config\") pod \"neutron-dbcc54df-b7blt\" (UID: \"fc94a354-bf43-4d41-bd15-33a8c766752f\") " pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.463292 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc94a354-bf43-4d41-bd15-33a8c766752f-ovndb-tls-certs\") pod \"neutron-dbcc54df-b7blt\" (UID: 
\"fc94a354-bf43-4d41-bd15-33a8c766752f\") " pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.470076 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc94a354-bf43-4d41-bd15-33a8c766752f-public-tls-certs\") pod \"neutron-dbcc54df-b7blt\" (UID: \"fc94a354-bf43-4d41-bd15-33a8c766752f\") " pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.473955 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc94a354-bf43-4d41-bd15-33a8c766752f-internal-tls-certs\") pod \"neutron-dbcc54df-b7blt\" (UID: \"fc94a354-bf43-4d41-bd15-33a8c766752f\") " pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.488288 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-264fz\" (UniqueName: \"kubernetes.io/projected/fc94a354-bf43-4d41-bd15-33a8c766752f-kube-api-access-264fz\") pod \"neutron-dbcc54df-b7blt\" (UID: \"fc94a354-bf43-4d41-bd15-33a8c766752f\") " pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.537245 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6bf6f7cb48-d9zj4"] Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.543862 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.548722 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.549029 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.563078 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6bf6f7cb48-d9zj4"] Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.579401 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.638904 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0759441-d9a0-4d4d-aead-69e48bcc16c7-config-data-custom\") pod \"barbican-api-6bf6f7cb48-d9zj4\" (UID: \"f0759441-d9a0-4d4d-aead-69e48bcc16c7\") " pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.638977 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0759441-d9a0-4d4d-aead-69e48bcc16c7-public-tls-certs\") pod \"barbican-api-6bf6f7cb48-d9zj4\" (UID: \"f0759441-d9a0-4d4d-aead-69e48bcc16c7\") " pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.639054 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0759441-d9a0-4d4d-aead-69e48bcc16c7-internal-tls-certs\") pod \"barbican-api-6bf6f7cb48-d9zj4\" (UID: \"f0759441-d9a0-4d4d-aead-69e48bcc16c7\") " pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 
29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.639198 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0759441-d9a0-4d4d-aead-69e48bcc16c7-logs\") pod \"barbican-api-6bf6f7cb48-d9zj4\" (UID: \"f0759441-d9a0-4d4d-aead-69e48bcc16c7\") " pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.639281 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98tr4\" (UniqueName: \"kubernetes.io/projected/f0759441-d9a0-4d4d-aead-69e48bcc16c7-kube-api-access-98tr4\") pod \"barbican-api-6bf6f7cb48-d9zj4\" (UID: \"f0759441-d9a0-4d4d-aead-69e48bcc16c7\") " pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.639361 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0759441-d9a0-4d4d-aead-69e48bcc16c7-config-data\") pod \"barbican-api-6bf6f7cb48-d9zj4\" (UID: \"f0759441-d9a0-4d4d-aead-69e48bcc16c7\") " pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.639412 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0759441-d9a0-4d4d-aead-69e48bcc16c7-combined-ca-bundle\") pod \"barbican-api-6bf6f7cb48-d9zj4\" (UID: \"f0759441-d9a0-4d4d-aead-69e48bcc16c7\") " pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.741804 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0759441-d9a0-4d4d-aead-69e48bcc16c7-logs\") pod \"barbican-api-6bf6f7cb48-d9zj4\" (UID: \"f0759441-d9a0-4d4d-aead-69e48bcc16c7\") " pod="openstack/barbican-api-6bf6f7cb48-d9zj4" 
Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.741892 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98tr4\" (UniqueName: \"kubernetes.io/projected/f0759441-d9a0-4d4d-aead-69e48bcc16c7-kube-api-access-98tr4\") pod \"barbican-api-6bf6f7cb48-d9zj4\" (UID: \"f0759441-d9a0-4d4d-aead-69e48bcc16c7\") " pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.741980 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0759441-d9a0-4d4d-aead-69e48bcc16c7-config-data\") pod \"barbican-api-6bf6f7cb48-d9zj4\" (UID: \"f0759441-d9a0-4d4d-aead-69e48bcc16c7\") " pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.742039 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0759441-d9a0-4d4d-aead-69e48bcc16c7-combined-ca-bundle\") pod \"barbican-api-6bf6f7cb48-d9zj4\" (UID: \"f0759441-d9a0-4d4d-aead-69e48bcc16c7\") " pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.742122 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0759441-d9a0-4d4d-aead-69e48bcc16c7-config-data-custom\") pod \"barbican-api-6bf6f7cb48-d9zj4\" (UID: \"f0759441-d9a0-4d4d-aead-69e48bcc16c7\") " pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.742152 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0759441-d9a0-4d4d-aead-69e48bcc16c7-public-tls-certs\") pod \"barbican-api-6bf6f7cb48-d9zj4\" (UID: \"f0759441-d9a0-4d4d-aead-69e48bcc16c7\") " pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:09 crc 
kubenswrapper[4947]: I1129 06:57:09.742238 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0759441-d9a0-4d4d-aead-69e48bcc16c7-internal-tls-certs\") pod \"barbican-api-6bf6f7cb48-d9zj4\" (UID: \"f0759441-d9a0-4d4d-aead-69e48bcc16c7\") " pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.883178 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0759441-d9a0-4d4d-aead-69e48bcc16c7-logs\") pod \"barbican-api-6bf6f7cb48-d9zj4\" (UID: \"f0759441-d9a0-4d4d-aead-69e48bcc16c7\") " pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.884951 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0759441-d9a0-4d4d-aead-69e48bcc16c7-internal-tls-certs\") pod \"barbican-api-6bf6f7cb48-d9zj4\" (UID: \"f0759441-d9a0-4d4d-aead-69e48bcc16c7\") " pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.890199 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0759441-d9a0-4d4d-aead-69e48bcc16c7-config-data-custom\") pod \"barbican-api-6bf6f7cb48-d9zj4\" (UID: \"f0759441-d9a0-4d4d-aead-69e48bcc16c7\") " pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.893722 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0759441-d9a0-4d4d-aead-69e48bcc16c7-public-tls-certs\") pod \"barbican-api-6bf6f7cb48-d9zj4\" (UID: \"f0759441-d9a0-4d4d-aead-69e48bcc16c7\") " pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.920335 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0759441-d9a0-4d4d-aead-69e48bcc16c7-combined-ca-bundle\") pod \"barbican-api-6bf6f7cb48-d9zj4\" (UID: \"f0759441-d9a0-4d4d-aead-69e48bcc16c7\") " pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.934789 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0759441-d9a0-4d4d-aead-69e48bcc16c7-config-data\") pod \"barbican-api-6bf6f7cb48-d9zj4\" (UID: \"f0759441-d9a0-4d4d-aead-69e48bcc16c7\") " pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.939466 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-569f86dc65-rkk52" Nov 29 06:57:09 crc kubenswrapper[4947]: I1129 06:57:09.945988 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98tr4\" (UniqueName: \"kubernetes.io/projected/f0759441-d9a0-4d4d-aead-69e48bcc16c7-kube-api-access-98tr4\") pod \"barbican-api-6bf6f7cb48-d9zj4\" (UID: \"f0759441-d9a0-4d4d-aead-69e48bcc16c7\") " pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:10 crc kubenswrapper[4947]: I1129 06:57:10.073181 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74cff4c986-xvvw9" Nov 29 06:57:10 crc kubenswrapper[4947]: I1129 06:57:10.201688 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74cff4c986-xvvw9" Nov 29 06:57:10 crc kubenswrapper[4947]: I1129 06:57:10.212099 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:10 crc kubenswrapper[4947]: I1129 06:57:10.677025 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dbcc54df-b7blt"] Nov 29 06:57:10 crc kubenswrapper[4947]: I1129 06:57:10.919949 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6bf6f7cb48-d9zj4"] Nov 29 06:57:11 crc kubenswrapper[4947]: I1129 06:57:11.070013 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bf6f7cb48-d9zj4" event={"ID":"f0759441-d9a0-4d4d-aead-69e48bcc16c7","Type":"ContainerStarted","Data":"f76c9517605010e000d4fa9f1f709d8d3cb43830725bbcaa52be1369f2d882ef"} Nov 29 06:57:11 crc kubenswrapper[4947]: I1129 06:57:11.073107 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dbcc54df-b7blt" event={"ID":"fc94a354-bf43-4d41-bd15-33a8c766752f","Type":"ContainerStarted","Data":"04d0673df876c91f16960c4cf6144dc4829ba58cda8a2a70a3de11c4e1274cb2"} Nov 29 06:57:11 crc kubenswrapper[4947]: I1129 06:57:11.535363 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 29 06:57:11 crc kubenswrapper[4947]: I1129 06:57:11.537624 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 29 06:57:11 crc kubenswrapper[4947]: I1129 06:57:11.543480 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-w4ltp" Nov 29 06:57:11 crc kubenswrapper[4947]: I1129 06:57:11.543831 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 29 06:57:11 crc kubenswrapper[4947]: I1129 06:57:11.549186 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 29 06:57:11 crc kubenswrapper[4947]: I1129 06:57:11.549447 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 29 06:57:11 crc kubenswrapper[4947]: I1129 06:57:11.672256 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c4vr\" (UniqueName: \"kubernetes.io/projected/ee3635c4-674f-4ea9-890c-882857f766ab-kube-api-access-8c4vr\") pod \"openstackclient\" (UID: \"ee3635c4-674f-4ea9-890c-882857f766ab\") " pod="openstack/openstackclient" Nov 29 06:57:11 crc kubenswrapper[4947]: I1129 06:57:11.672658 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ee3635c4-674f-4ea9-890c-882857f766ab-openstack-config\") pod \"openstackclient\" (UID: \"ee3635c4-674f-4ea9-890c-882857f766ab\") " pod="openstack/openstackclient" Nov 29 06:57:11 crc kubenswrapper[4947]: I1129 06:57:11.672744 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3635c4-674f-4ea9-890c-882857f766ab-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ee3635c4-674f-4ea9-890c-882857f766ab\") " pod="openstack/openstackclient" Nov 29 06:57:11 crc kubenswrapper[4947]: I1129 06:57:11.672791 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ee3635c4-674f-4ea9-890c-882857f766ab-openstack-config-secret\") pod \"openstackclient\" (UID: \"ee3635c4-674f-4ea9-890c-882857f766ab\") " pod="openstack/openstackclient" Nov 29 06:57:11 crc kubenswrapper[4947]: I1129 06:57:11.774912 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ee3635c4-674f-4ea9-890c-882857f766ab-openstack-config-secret\") pod \"openstackclient\" (UID: \"ee3635c4-674f-4ea9-890c-882857f766ab\") " pod="openstack/openstackclient" Nov 29 06:57:11 crc kubenswrapper[4947]: I1129 06:57:11.775045 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c4vr\" (UniqueName: \"kubernetes.io/projected/ee3635c4-674f-4ea9-890c-882857f766ab-kube-api-access-8c4vr\") pod \"openstackclient\" (UID: \"ee3635c4-674f-4ea9-890c-882857f766ab\") " pod="openstack/openstackclient" Nov 29 06:57:11 crc kubenswrapper[4947]: I1129 06:57:11.775115 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ee3635c4-674f-4ea9-890c-882857f766ab-openstack-config\") pod \"openstackclient\" (UID: \"ee3635c4-674f-4ea9-890c-882857f766ab\") " pod="openstack/openstackclient" Nov 29 06:57:11 crc kubenswrapper[4947]: I1129 06:57:11.775209 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3635c4-674f-4ea9-890c-882857f766ab-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ee3635c4-674f-4ea9-890c-882857f766ab\") " pod="openstack/openstackclient" Nov 29 06:57:11 crc kubenswrapper[4947]: I1129 06:57:11.777173 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/ee3635c4-674f-4ea9-890c-882857f766ab-openstack-config\") pod \"openstackclient\" (UID: \"ee3635c4-674f-4ea9-890c-882857f766ab\") " pod="openstack/openstackclient" Nov 29 06:57:11 crc kubenswrapper[4947]: I1129 06:57:11.787417 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3635c4-674f-4ea9-890c-882857f766ab-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ee3635c4-674f-4ea9-890c-882857f766ab\") " pod="openstack/openstackclient" Nov 29 06:57:11 crc kubenswrapper[4947]: I1129 06:57:11.795676 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c4vr\" (UniqueName: \"kubernetes.io/projected/ee3635c4-674f-4ea9-890c-882857f766ab-kube-api-access-8c4vr\") pod \"openstackclient\" (UID: \"ee3635c4-674f-4ea9-890c-882857f766ab\") " pod="openstack/openstackclient" Nov 29 06:57:11 crc kubenswrapper[4947]: I1129 06:57:11.800541 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ee3635c4-674f-4ea9-890c-882857f766ab-openstack-config-secret\") pod \"openstackclient\" (UID: \"ee3635c4-674f-4ea9-890c-882857f766ab\") " pod="openstack/openstackclient" Nov 29 06:57:11 crc kubenswrapper[4947]: I1129 06:57:11.927600 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.134018 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bf6f7cb48-d9zj4" event={"ID":"f0759441-d9a0-4d4d-aead-69e48bcc16c7","Type":"ContainerStarted","Data":"77e33c6d3241449ae163b0b436792bd2e238b226f06751602832c3bc170a8d0c"} Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.134579 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bf6f7cb48-d9zj4" event={"ID":"f0759441-d9a0-4d4d-aead-69e48bcc16c7","Type":"ContainerStarted","Data":"1eb20e82e4455f937c5266da3e3abe5add1ec12d1990cb05edac137787035032"} Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.134706 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.134741 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.147643 4947 generic.go:334] "Generic (PLEG): container finished" podID="f3ec3135-6049-4853-b571-d23200456fc5" containerID="b95afbfd702ee6af53c7279a3db23229d79ce557513cd99b8ff3373116d61f92" exitCode=0 Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.147788 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f3ec3135-6049-4853-b571-d23200456fc5","Type":"ContainerDied","Data":"b95afbfd702ee6af53c7279a3db23229d79ce557513cd99b8ff3373116d61f92"} Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.156789 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dbcc54df-b7blt" event={"ID":"fc94a354-bf43-4d41-bd15-33a8c766752f","Type":"ContainerStarted","Data":"c88deb555857ceb6c78fb0b72567035e14c0982b8d1cc32e80af7ad22f577aa1"} Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.156897 4947 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-dbcc54df-b7blt" event={"ID":"fc94a354-bf43-4d41-bd15-33a8c766752f","Type":"ContainerStarted","Data":"d0ddc7ad2149018cf3080337acd14cafa0b0d75c9179153e3bf5dc69ff3c53c8"} Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.157433 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.180772 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6bf6f7cb48-d9zj4" podStartSLOduration=3.180739032 podStartE2EDuration="3.180739032s" podCreationTimestamp="2025-11-29 06:57:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:57:12.164548315 +0000 UTC m=+1383.208930396" watchObservedRunningTime="2025-11-29 06:57:12.180739032 +0000 UTC m=+1383.225121113" Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.186803 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.233274 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dbcc54df-b7blt" podStartSLOduration=3.233243222 podStartE2EDuration="3.233243222s" podCreationTimestamp="2025-11-29 06:57:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:57:12.216784058 +0000 UTC m=+1383.261166149" watchObservedRunningTime="2025-11-29 06:57:12.233243222 +0000 UTC m=+1383.277625313" Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.284673 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-config-data\") pod \"f3ec3135-6049-4853-b571-d23200456fc5\" (UID: \"f3ec3135-6049-4853-b571-d23200456fc5\") " Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.285205 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-config-data-custom\") pod \"f3ec3135-6049-4853-b571-d23200456fc5\" (UID: \"f3ec3135-6049-4853-b571-d23200456fc5\") " Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.285281 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-scripts\") pod \"f3ec3135-6049-4853-b571-d23200456fc5\" (UID: \"f3ec3135-6049-4853-b571-d23200456fc5\") " Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.285434 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-combined-ca-bundle\") pod \"f3ec3135-6049-4853-b571-d23200456fc5\" (UID: 
\"f3ec3135-6049-4853-b571-d23200456fc5\") " Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.285470 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3ec3135-6049-4853-b571-d23200456fc5-etc-machine-id\") pod \"f3ec3135-6049-4853-b571-d23200456fc5\" (UID: \"f3ec3135-6049-4853-b571-d23200456fc5\") " Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.285590 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f22vf\" (UniqueName: \"kubernetes.io/projected/f3ec3135-6049-4853-b571-d23200456fc5-kube-api-access-f22vf\") pod \"f3ec3135-6049-4853-b571-d23200456fc5\" (UID: \"f3ec3135-6049-4853-b571-d23200456fc5\") " Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.298114 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f3ec3135-6049-4853-b571-d23200456fc5" (UID: "f3ec3135-6049-4853-b571-d23200456fc5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.301536 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3ec3135-6049-4853-b571-d23200456fc5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f3ec3135-6049-4853-b571-d23200456fc5" (UID: "f3ec3135-6049-4853-b571-d23200456fc5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.317431 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-scripts" (OuterVolumeSpecName: "scripts") pod "f3ec3135-6049-4853-b571-d23200456fc5" (UID: "f3ec3135-6049-4853-b571-d23200456fc5"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.327390 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3ec3135-6049-4853-b571-d23200456fc5-kube-api-access-f22vf" (OuterVolumeSpecName: "kube-api-access-f22vf") pod "f3ec3135-6049-4853-b571-d23200456fc5" (UID: "f3ec3135-6049-4853-b571-d23200456fc5"). InnerVolumeSpecName "kube-api-access-f22vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.389667 4947 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.389727 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.389751 4947 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3ec3135-6049-4853-b571-d23200456fc5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.389767 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f22vf\" (UniqueName: \"kubernetes.io/projected/f3ec3135-6049-4853-b571-d23200456fc5-kube-api-access-f22vf\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.934744 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3ec3135-6049-4853-b571-d23200456fc5" (UID: "f3ec3135-6049-4853-b571-d23200456fc5"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:12 crc kubenswrapper[4947]: I1129 06:57:12.976815 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.043356 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-config-data" (OuterVolumeSpecName: "config-data") pod "f3ec3135-6049-4853-b571-d23200456fc5" (UID: "f3ec3135-6049-4853-b571-d23200456fc5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.081614 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3ec3135-6049-4853-b571-d23200456fc5-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.170097 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f3ec3135-6049-4853-b571-d23200456fc5","Type":"ContainerDied","Data":"14bd56706c0de0c3807cff266d7f3b2c1b2b4aaf920c97524436d23de043e8c1"} Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.170644 4947 scope.go:117] "RemoveContainer" containerID="57d4ad2c4e9379e05d802770be0a292157299c4dba878fc3378985bd6dd3f6bb" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.170204 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.206989 4947 scope.go:117] "RemoveContainer" containerID="b95afbfd702ee6af53c7279a3db23229d79ce557513cd99b8ff3373116d61f92" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.235316 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.253395 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.269309 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.300319 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 06:57:13 crc kubenswrapper[4947]: E1129 06:57:13.301033 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3ec3135-6049-4853-b571-d23200456fc5" containerName="cinder-scheduler" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.301066 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3ec3135-6049-4853-b571-d23200456fc5" containerName="cinder-scheduler" Nov 29 06:57:13 crc kubenswrapper[4947]: E1129 06:57:13.301114 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3ec3135-6049-4853-b571-d23200456fc5" containerName="probe" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.301124 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3ec3135-6049-4853-b571-d23200456fc5" containerName="probe" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.301352 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3ec3135-6049-4853-b571-d23200456fc5" containerName="cinder-scheduler" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.301388 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3ec3135-6049-4853-b571-d23200456fc5" 
containerName="probe" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.302810 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.305647 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.316469 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.390571 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q94wg\" (UniqueName: \"kubernetes.io/projected/6addf10a-a22b-445f-af40-5812ab69c7a0-kube-api-access-q94wg\") pod \"cinder-scheduler-0\" (UID: \"6addf10a-a22b-445f-af40-5812ab69c7a0\") " pod="openstack/cinder-scheduler-0" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.390638 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6addf10a-a22b-445f-af40-5812ab69c7a0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6addf10a-a22b-445f-af40-5812ab69c7a0\") " pod="openstack/cinder-scheduler-0" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.390699 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6addf10a-a22b-445f-af40-5812ab69c7a0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6addf10a-a22b-445f-af40-5812ab69c7a0\") " pod="openstack/cinder-scheduler-0" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.390722 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6addf10a-a22b-445f-af40-5812ab69c7a0-config-data-custom\") pod 
\"cinder-scheduler-0\" (UID: \"6addf10a-a22b-445f-af40-5812ab69c7a0\") " pod="openstack/cinder-scheduler-0" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.390820 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6addf10a-a22b-445f-af40-5812ab69c7a0-scripts\") pod \"cinder-scheduler-0\" (UID: \"6addf10a-a22b-445f-af40-5812ab69c7a0\") " pod="openstack/cinder-scheduler-0" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.390850 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6addf10a-a22b-445f-af40-5812ab69c7a0-config-data\") pod \"cinder-scheduler-0\" (UID: \"6addf10a-a22b-445f-af40-5812ab69c7a0\") " pod="openstack/cinder-scheduler-0" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.493327 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6addf10a-a22b-445f-af40-5812ab69c7a0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6addf10a-a22b-445f-af40-5812ab69c7a0\") " pod="openstack/cinder-scheduler-0" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.493404 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6addf10a-a22b-445f-af40-5812ab69c7a0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6addf10a-a22b-445f-af40-5812ab69c7a0\") " pod="openstack/cinder-scheduler-0" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.493469 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6addf10a-a22b-445f-af40-5812ab69c7a0-scripts\") pod \"cinder-scheduler-0\" (UID: \"6addf10a-a22b-445f-af40-5812ab69c7a0\") " pod="openstack/cinder-scheduler-0" Nov 29 06:57:13 crc kubenswrapper[4947]: 
I1129 06:57:13.493514 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6addf10a-a22b-445f-af40-5812ab69c7a0-config-data\") pod \"cinder-scheduler-0\" (UID: \"6addf10a-a22b-445f-af40-5812ab69c7a0\") " pod="openstack/cinder-scheduler-0" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.493617 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q94wg\" (UniqueName: \"kubernetes.io/projected/6addf10a-a22b-445f-af40-5812ab69c7a0-kube-api-access-q94wg\") pod \"cinder-scheduler-0\" (UID: \"6addf10a-a22b-445f-af40-5812ab69c7a0\") " pod="openstack/cinder-scheduler-0" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.493648 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6addf10a-a22b-445f-af40-5812ab69c7a0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6addf10a-a22b-445f-af40-5812ab69c7a0\") " pod="openstack/cinder-scheduler-0" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.493753 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6addf10a-a22b-445f-af40-5812ab69c7a0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6addf10a-a22b-445f-af40-5812ab69c7a0\") " pod="openstack/cinder-scheduler-0" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.502432 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6addf10a-a22b-445f-af40-5812ab69c7a0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6addf10a-a22b-445f-af40-5812ab69c7a0\") " pod="openstack/cinder-scheduler-0" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.506124 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6addf10a-a22b-445f-af40-5812ab69c7a0-scripts\") pod \"cinder-scheduler-0\" (UID: \"6addf10a-a22b-445f-af40-5812ab69c7a0\") " pod="openstack/cinder-scheduler-0" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.513102 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6addf10a-a22b-445f-af40-5812ab69c7a0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6addf10a-a22b-445f-af40-5812ab69c7a0\") " pod="openstack/cinder-scheduler-0" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.513876 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6addf10a-a22b-445f-af40-5812ab69c7a0-config-data\") pod \"cinder-scheduler-0\" (UID: \"6addf10a-a22b-445f-af40-5812ab69c7a0\") " pod="openstack/cinder-scheduler-0" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.516784 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q94wg\" (UniqueName: \"kubernetes.io/projected/6addf10a-a22b-445f-af40-5812ab69c7a0-kube-api-access-q94wg\") pod \"cinder-scheduler-0\" (UID: \"6addf10a-a22b-445f-af40-5812ab69c7a0\") " pod="openstack/cinder-scheduler-0" Nov 29 06:57:13 crc kubenswrapper[4947]: I1129 06:57:13.636207 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 06:57:14 crc kubenswrapper[4947]: I1129 06:57:14.181100 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ee3635c4-674f-4ea9-890c-882857f766ab","Type":"ContainerStarted","Data":"f6e44fa588fbdda40bc21d4c9422d336755418e1992569e7f047b93125bfa03f"} Nov 29 06:57:14 crc kubenswrapper[4947]: W1129 06:57:14.240295 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6addf10a_a22b_445f_af40_5812ab69c7a0.slice/crio-2a164557b0baa780355e0872ecdebf045006e5c8d18fb719def4a2a56d2ed6a4 WatchSource:0}: Error finding container 2a164557b0baa780355e0872ecdebf045006e5c8d18fb719def4a2a56d2ed6a4: Status 404 returned error can't find the container with id 2a164557b0baa780355e0872ecdebf045006e5c8d18fb719def4a2a56d2ed6a4 Nov 29 06:57:14 crc kubenswrapper[4947]: I1129 06:57:14.270744 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 06:57:14 crc kubenswrapper[4947]: I1129 06:57:14.521691 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" Nov 29 06:57:14 crc kubenswrapper[4947]: I1129 06:57:14.585561 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-7fptv"] Nov 29 06:57:14 crc kubenswrapper[4947]: I1129 06:57:14.586723 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv" podUID="8bfd51e3-c63d-4382-a864-cb0570e277d8" containerName="dnsmasq-dns" containerID="cri-o://8731b73dcd468f70161bee9662a54eaf91c21f855da0fbd1d56ae2c80052d69f" gracePeriod=10 Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.198131 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3ec3135-6049-4853-b571-d23200456fc5" 
path="/var/lib/kubelet/pods/f3ec3135-6049-4853-b571-d23200456fc5/volumes" Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.214687 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6addf10a-a22b-445f-af40-5812ab69c7a0","Type":"ContainerStarted","Data":"031833eec0b333537f189196eded44bc1837784a5e0a43a4bab769c2f31b7bd0"} Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.215177 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6addf10a-a22b-445f-af40-5812ab69c7a0","Type":"ContainerStarted","Data":"2a164557b0baa780355e0872ecdebf045006e5c8d18fb719def4a2a56d2ed6a4"} Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.225557 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bfd51e3-c63d-4382-a864-cb0570e277d8" containerID="8731b73dcd468f70161bee9662a54eaf91c21f855da0fbd1d56ae2c80052d69f" exitCode=0 Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.225631 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv" event={"ID":"8bfd51e3-c63d-4382-a864-cb0570e277d8","Type":"ContainerDied","Data":"8731b73dcd468f70161bee9662a54eaf91c21f855da0fbd1d56ae2c80052d69f"} Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.225676 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv" event={"ID":"8bfd51e3-c63d-4382-a864-cb0570e277d8","Type":"ContainerDied","Data":"7e5822a3cc7c73b40b26869a36246c16c397ebf79c6cb546e3d84b4f2844cdaf"} Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.225691 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e5822a3cc7c73b40b26869a36246c16c397ebf79c6cb546e3d84b4f2844cdaf" Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.298493 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv" Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.442936 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-ovsdbserver-nb\") pod \"8bfd51e3-c63d-4382-a864-cb0570e277d8\" (UID: \"8bfd51e3-c63d-4382-a864-cb0570e277d8\") " Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.443094 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm66t\" (UniqueName: \"kubernetes.io/projected/8bfd51e3-c63d-4382-a864-cb0570e277d8-kube-api-access-fm66t\") pod \"8bfd51e3-c63d-4382-a864-cb0570e277d8\" (UID: \"8bfd51e3-c63d-4382-a864-cb0570e277d8\") " Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.443251 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-dns-svc\") pod \"8bfd51e3-c63d-4382-a864-cb0570e277d8\" (UID: \"8bfd51e3-c63d-4382-a864-cb0570e277d8\") " Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.443276 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-ovsdbserver-sb\") pod \"8bfd51e3-c63d-4382-a864-cb0570e277d8\" (UID: \"8bfd51e3-c63d-4382-a864-cb0570e277d8\") " Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.443313 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-config\") pod \"8bfd51e3-c63d-4382-a864-cb0570e277d8\" (UID: \"8bfd51e3-c63d-4382-a864-cb0570e277d8\") " Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.489254 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/8bfd51e3-c63d-4382-a864-cb0570e277d8-kube-api-access-fm66t" (OuterVolumeSpecName: "kube-api-access-fm66t") pod "8bfd51e3-c63d-4382-a864-cb0570e277d8" (UID: "8bfd51e3-c63d-4382-a864-cb0570e277d8"). InnerVolumeSpecName "kube-api-access-fm66t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.545796 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm66t\" (UniqueName: \"kubernetes.io/projected/8bfd51e3-c63d-4382-a864-cb0570e277d8-kube-api-access-fm66t\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.549434 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8bfd51e3-c63d-4382-a864-cb0570e277d8" (UID: "8bfd51e3-c63d-4382-a864-cb0570e277d8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.562264 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8bfd51e3-c63d-4382-a864-cb0570e277d8" (UID: "8bfd51e3-c63d-4382-a864-cb0570e277d8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.567198 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8bfd51e3-c63d-4382-a864-cb0570e277d8" (UID: "8bfd51e3-c63d-4382-a864-cb0570e277d8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.591624 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-config" (OuterVolumeSpecName: "config") pod "8bfd51e3-c63d-4382-a864-cb0570e277d8" (UID: "8bfd51e3-c63d-4382-a864-cb0570e277d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.649344 4947 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.649398 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.649412 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:15 crc kubenswrapper[4947]: I1129 06:57:15.649422 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8bfd51e3-c63d-4382-a864-cb0570e277d8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:16 crc kubenswrapper[4947]: I1129 06:57:16.243609 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-7fptv" Nov 29 06:57:16 crc kubenswrapper[4947]: I1129 06:57:16.244033 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6addf10a-a22b-445f-af40-5812ab69c7a0","Type":"ContainerStarted","Data":"77ce504e4fa9e39d842df3dc01324a3cd31e504fa64713faef7fdaa0de000886"} Nov 29 06:57:16 crc kubenswrapper[4947]: I1129 06:57:16.289651 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.289613358 podStartE2EDuration="3.289613358s" podCreationTimestamp="2025-11-29 06:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:57:16.269502353 +0000 UTC m=+1387.313884434" watchObservedRunningTime="2025-11-29 06:57:16.289613358 +0000 UTC m=+1387.333995439" Nov 29 06:57:16 crc kubenswrapper[4947]: I1129 06:57:16.304126 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-7fptv"] Nov 29 06:57:16 crc kubenswrapper[4947]: I1129 06:57:16.323275 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-7fptv"] Nov 29 06:57:16 crc kubenswrapper[4947]: I1129 06:57:16.796862 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 29 06:57:17 crc kubenswrapper[4947]: I1129 06:57:17.332819 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bfd51e3-c63d-4382-a864-cb0570e277d8" path="/var/lib/kubelet/pods/8bfd51e3-c63d-4382-a864-cb0570e277d8/volumes" Nov 29 06:57:18 crc kubenswrapper[4947]: I1129 06:57:18.636906 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 29 06:57:21 crc kubenswrapper[4947]: I1129 06:57:21.931432 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:22 crc kubenswrapper[4947]: I1129 06:57:22.104044 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6bf6f7cb48-d9zj4" Nov 29 06:57:22 crc kubenswrapper[4947]: I1129 06:57:22.200862 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-74cff4c986-xvvw9"] Nov 29 06:57:22 crc kubenswrapper[4947]: I1129 06:57:22.201270 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-74cff4c986-xvvw9" podUID="64edb581-33ba-46eb-a455-ce4f733f6944" containerName="barbican-api-log" containerID="cri-o://ed921614ee825db304d84fd0399b2288485e209049695a02fee9507f5f0dfd5f" gracePeriod=30 Nov 29 06:57:22 crc kubenswrapper[4947]: I1129 06:57:22.202323 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-74cff4c986-xvvw9" podUID="64edb581-33ba-46eb-a455-ce4f733f6944" containerName="barbican-api" containerID="cri-o://073329272db47c35438b64c66a2b0ca12747bb814aec8f5ca5d1ea0ab8637b33" gracePeriod=30 Nov 29 06:57:22 crc kubenswrapper[4947]: I1129 06:57:22.408722 4947 generic.go:334] "Generic (PLEG): container finished" podID="64edb581-33ba-46eb-a455-ce4f733f6944" containerID="ed921614ee825db304d84fd0399b2288485e209049695a02fee9507f5f0dfd5f" exitCode=143 Nov 29 06:57:22 crc kubenswrapper[4947]: I1129 06:57:22.408953 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74cff4c986-xvvw9" event={"ID":"64edb581-33ba-46eb-a455-ce4f733f6944","Type":"ContainerDied","Data":"ed921614ee825db304d84fd0399b2288485e209049695a02fee9507f5f0dfd5f"} Nov 29 06:57:22 crc kubenswrapper[4947]: I1129 06:57:22.582117 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 29 06:57:23 crc kubenswrapper[4947]: I1129 06:57:23.914687 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/cinder-scheduler-0" Nov 29 06:57:25 crc kubenswrapper[4947]: I1129 06:57:25.369162 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-74cff4c986-xvvw9" podUID="64edb581-33ba-46eb-a455-ce4f733f6944" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.146:9311/healthcheck\": read tcp 10.217.0.2:39996->10.217.0.146:9311: read: connection reset by peer" Nov 29 06:57:25 crc kubenswrapper[4947]: I1129 06:57:25.369246 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-74cff4c986-xvvw9" podUID="64edb581-33ba-46eb-a455-ce4f733f6944" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.146:9311/healthcheck\": read tcp 10.217.0.2:40002->10.217.0.146:9311: read: connection reset by peer" Nov 29 06:57:25 crc kubenswrapper[4947]: I1129 06:57:25.458766 4947 generic.go:334] "Generic (PLEG): container finished" podID="64edb581-33ba-46eb-a455-ce4f733f6944" containerID="073329272db47c35438b64c66a2b0ca12747bb814aec8f5ca5d1ea0ab8637b33" exitCode=0 Nov 29 06:57:25 crc kubenswrapper[4947]: I1129 06:57:25.458826 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74cff4c986-xvvw9" event={"ID":"64edb581-33ba-46eb-a455-ce4f733f6944","Type":"ContainerDied","Data":"073329272db47c35438b64c66a2b0ca12747bb814aec8f5ca5d1ea0ab8637b33"} Nov 29 06:57:26 crc kubenswrapper[4947]: I1129 06:57:26.209203 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:26 crc kubenswrapper[4947]: I1129 06:57:26.209552 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9825e55-7596-4c45-aa4c-0b74cc470e65" containerName="ceilometer-central-agent" containerID="cri-o://c2398ce9e4ac38cb6fbea2be89e944f75376f74a1462601f3734203cb62d622a" gracePeriod=30 Nov 29 06:57:26 crc kubenswrapper[4947]: I1129 06:57:26.209597 4947 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9825e55-7596-4c45-aa4c-0b74cc470e65" containerName="proxy-httpd" containerID="cri-o://ac0b51b01da8f62c1b943c2fdd2504630ba0c4e130fbd7fe7750c0e0dae57019" gracePeriod=30 Nov 29 06:57:26 crc kubenswrapper[4947]: I1129 06:57:26.209679 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9825e55-7596-4c45-aa4c-0b74cc470e65" containerName="sg-core" containerID="cri-o://30ce8e2ee9e46d4f382b608f6eb99b4727165939538e98a61788a1256b061db2" gracePeriod=30 Nov 29 06:57:26 crc kubenswrapper[4947]: I1129 06:57:26.209720 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9825e55-7596-4c45-aa4c-0b74cc470e65" containerName="ceilometer-notification-agent" containerID="cri-o://81435585e674f07540a92eb4a4645d7cc44ffa1e357d5aa68425c08abbe3c3fc" gracePeriod=30 Nov 29 06:57:26 crc kubenswrapper[4947]: I1129 06:57:26.473349 4947 generic.go:334] "Generic (PLEG): container finished" podID="c9825e55-7596-4c45-aa4c-0b74cc470e65" containerID="30ce8e2ee9e46d4f382b608f6eb99b4727165939538e98a61788a1256b061db2" exitCode=2 Nov 29 06:57:26 crc kubenswrapper[4947]: I1129 06:57:26.473399 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9825e55-7596-4c45-aa4c-0b74cc470e65","Type":"ContainerDied","Data":"30ce8e2ee9e46d4f382b608f6eb99b4727165939538e98a61788a1256b061db2"} Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.084150 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-74cff4c986-xvvw9" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.155882 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64edb581-33ba-46eb-a455-ce4f733f6944-config-data-custom\") pod \"64edb581-33ba-46eb-a455-ce4f733f6944\" (UID: \"64edb581-33ba-46eb-a455-ce4f733f6944\") " Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.156315 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64edb581-33ba-46eb-a455-ce4f733f6944-combined-ca-bundle\") pod \"64edb581-33ba-46eb-a455-ce4f733f6944\" (UID: \"64edb581-33ba-46eb-a455-ce4f733f6944\") " Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.156612 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxv2t\" (UniqueName: \"kubernetes.io/projected/64edb581-33ba-46eb-a455-ce4f733f6944-kube-api-access-rxv2t\") pod \"64edb581-33ba-46eb-a455-ce4f733f6944\" (UID: \"64edb581-33ba-46eb-a455-ce4f733f6944\") " Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.156765 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64edb581-33ba-46eb-a455-ce4f733f6944-config-data\") pod \"64edb581-33ba-46eb-a455-ce4f733f6944\" (UID: \"64edb581-33ba-46eb-a455-ce4f733f6944\") " Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.158399 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64edb581-33ba-46eb-a455-ce4f733f6944-logs\") pod \"64edb581-33ba-46eb-a455-ce4f733f6944\" (UID: \"64edb581-33ba-46eb-a455-ce4f733f6944\") " Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.161978 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/64edb581-33ba-46eb-a455-ce4f733f6944-logs" (OuterVolumeSpecName: "logs") pod "64edb581-33ba-46eb-a455-ce4f733f6944" (UID: "64edb581-33ba-46eb-a455-ce4f733f6944"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.162687 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64edb581-33ba-46eb-a455-ce4f733f6944-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "64edb581-33ba-46eb-a455-ce4f733f6944" (UID: "64edb581-33ba-46eb-a455-ce4f733f6944"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.165798 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64edb581-33ba-46eb-a455-ce4f733f6944-kube-api-access-rxv2t" (OuterVolumeSpecName: "kube-api-access-rxv2t") pod "64edb581-33ba-46eb-a455-ce4f733f6944" (UID: "64edb581-33ba-46eb-a455-ce4f733f6944"). InnerVolumeSpecName "kube-api-access-rxv2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.166604 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxv2t\" (UniqueName: \"kubernetes.io/projected/64edb581-33ba-46eb-a455-ce4f733f6944-kube-api-access-rxv2t\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.166715 4947 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64edb581-33ba-46eb-a455-ce4f733f6944-logs\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.166782 4947 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64edb581-33ba-46eb-a455-ce4f733f6944-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.188931 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64edb581-33ba-46eb-a455-ce4f733f6944-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64edb581-33ba-46eb-a455-ce4f733f6944" (UID: "64edb581-33ba-46eb-a455-ce4f733f6944"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.235411 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64edb581-33ba-46eb-a455-ce4f733f6944-config-data" (OuterVolumeSpecName: "config-data") pod "64edb581-33ba-46eb-a455-ce4f733f6944" (UID: "64edb581-33ba-46eb-a455-ce4f733f6944"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.269355 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64edb581-33ba-46eb-a455-ce4f733f6944-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.269395 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64edb581-33ba-46eb-a455-ce4f733f6944-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.485045 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ee3635c4-674f-4ea9-890c-882857f766ab","Type":"ContainerStarted","Data":"a8d870b6053eebc83b92e45eefb66aa2c8d1d754685470c3c52d959eefa20360"} Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.491542 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74cff4c986-xvvw9" event={"ID":"64edb581-33ba-46eb-a455-ce4f733f6944","Type":"ContainerDied","Data":"00da878760f6635ad6f00a3eb8a86297344bee87a8ee18d327a7e734d108070b"} Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.491847 4947 scope.go:117] "RemoveContainer" containerID="073329272db47c35438b64c66a2b0ca12747bb814aec8f5ca5d1ea0ab8637b33" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.492068 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-74cff4c986-xvvw9" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.496717 4947 generic.go:334] "Generic (PLEG): container finished" podID="c9825e55-7596-4c45-aa4c-0b74cc470e65" containerID="ac0b51b01da8f62c1b943c2fdd2504630ba0c4e130fbd7fe7750c0e0dae57019" exitCode=0 Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.496832 4947 generic.go:334] "Generic (PLEG): container finished" podID="c9825e55-7596-4c45-aa4c-0b74cc470e65" containerID="81435585e674f07540a92eb4a4645d7cc44ffa1e357d5aa68425c08abbe3c3fc" exitCode=0 Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.496903 4947 generic.go:334] "Generic (PLEG): container finished" podID="c9825e55-7596-4c45-aa4c-0b74cc470e65" containerID="c2398ce9e4ac38cb6fbea2be89e944f75376f74a1462601f3734203cb62d622a" exitCode=0 Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.496985 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9825e55-7596-4c45-aa4c-0b74cc470e65","Type":"ContainerDied","Data":"ac0b51b01da8f62c1b943c2fdd2504630ba0c4e130fbd7fe7750c0e0dae57019"} Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.497091 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9825e55-7596-4c45-aa4c-0b74cc470e65","Type":"ContainerDied","Data":"81435585e674f07540a92eb4a4645d7cc44ffa1e357d5aa68425c08abbe3c3fc"} Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.497175 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9825e55-7596-4c45-aa4c-0b74cc470e65","Type":"ContainerDied","Data":"c2398ce9e4ac38cb6fbea2be89e944f75376f74a1462601f3734203cb62d622a"} Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.515949 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.941863214 podStartE2EDuration="16.515920154s" podCreationTimestamp="2025-11-29 
06:57:11 +0000 UTC" firstStartedPulling="2025-11-29 06:57:13.25510088 +0000 UTC m=+1384.299482961" lastFinishedPulling="2025-11-29 06:57:26.82915782 +0000 UTC m=+1397.873539901" observedRunningTime="2025-11-29 06:57:27.507099593 +0000 UTC m=+1398.551481674" watchObservedRunningTime="2025-11-29 06:57:27.515920154 +0000 UTC m=+1398.560302245" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.552369 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-74cff4c986-xvvw9"] Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.568482 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-74cff4c986-xvvw9"] Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.568660 4947 scope.go:117] "RemoveContainer" containerID="ed921614ee825db304d84fd0399b2288485e209049695a02fee9507f5f0dfd5f" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.725888 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.781030 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glvt6\" (UniqueName: \"kubernetes.io/projected/c9825e55-7596-4c45-aa4c-0b74cc470e65-kube-api-access-glvt6\") pod \"c9825e55-7596-4c45-aa4c-0b74cc470e65\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.781150 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-sg-core-conf-yaml\") pod \"c9825e55-7596-4c45-aa4c-0b74cc470e65\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.781182 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-scripts\") pod 
\"c9825e55-7596-4c45-aa4c-0b74cc470e65\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.781314 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9825e55-7596-4c45-aa4c-0b74cc470e65-run-httpd\") pod \"c9825e55-7596-4c45-aa4c-0b74cc470e65\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.781429 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-config-data\") pod \"c9825e55-7596-4c45-aa4c-0b74cc470e65\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.781466 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-combined-ca-bundle\") pod \"c9825e55-7596-4c45-aa4c-0b74cc470e65\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.781560 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9825e55-7596-4c45-aa4c-0b74cc470e65-log-httpd\") pod \"c9825e55-7596-4c45-aa4c-0b74cc470e65\" (UID: \"c9825e55-7596-4c45-aa4c-0b74cc470e65\") " Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.781863 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9825e55-7596-4c45-aa4c-0b74cc470e65-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c9825e55-7596-4c45-aa4c-0b74cc470e65" (UID: "c9825e55-7596-4c45-aa4c-0b74cc470e65"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.782236 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9825e55-7596-4c45-aa4c-0b74cc470e65-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c9825e55-7596-4c45-aa4c-0b74cc470e65" (UID: "c9825e55-7596-4c45-aa4c-0b74cc470e65"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.782687 4947 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9825e55-7596-4c45-aa4c-0b74cc470e65-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.782708 4947 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9825e55-7596-4c45-aa4c-0b74cc470e65-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.798849 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-scripts" (OuterVolumeSpecName: "scripts") pod "c9825e55-7596-4c45-aa4c-0b74cc470e65" (UID: "c9825e55-7596-4c45-aa4c-0b74cc470e65"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.809885 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9825e55-7596-4c45-aa4c-0b74cc470e65-kube-api-access-glvt6" (OuterVolumeSpecName: "kube-api-access-glvt6") pod "c9825e55-7596-4c45-aa4c-0b74cc470e65" (UID: "c9825e55-7596-4c45-aa4c-0b74cc470e65"). InnerVolumeSpecName "kube-api-access-glvt6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.830145 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c9825e55-7596-4c45-aa4c-0b74cc470e65" (UID: "c9825e55-7596-4c45-aa4c-0b74cc470e65"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.885016 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glvt6\" (UniqueName: \"kubernetes.io/projected/c9825e55-7596-4c45-aa4c-0b74cc470e65-kube-api-access-glvt6\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.885078 4947 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.885094 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.975046 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9825e55-7596-4c45-aa4c-0b74cc470e65" (UID: "c9825e55-7596-4c45-aa4c-0b74cc470e65"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:27 crc kubenswrapper[4947]: I1129 06:57:27.991287 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.049718 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-d5xx7"] Nov 29 06:57:28 crc kubenswrapper[4947]: E1129 06:57:28.051143 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bfd51e3-c63d-4382-a864-cb0570e277d8" containerName="init" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.051170 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bfd51e3-c63d-4382-a864-cb0570e277d8" containerName="init" Nov 29 06:57:28 crc kubenswrapper[4947]: E1129 06:57:28.051211 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9825e55-7596-4c45-aa4c-0b74cc470e65" containerName="ceilometer-central-agent" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.051246 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9825e55-7596-4c45-aa4c-0b74cc470e65" containerName="ceilometer-central-agent" Nov 29 06:57:28 crc kubenswrapper[4947]: E1129 06:57:28.051281 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bfd51e3-c63d-4382-a864-cb0570e277d8" containerName="dnsmasq-dns" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.051290 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bfd51e3-c63d-4382-a864-cb0570e277d8" containerName="dnsmasq-dns" Nov 29 06:57:28 crc kubenswrapper[4947]: E1129 06:57:28.051302 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9825e55-7596-4c45-aa4c-0b74cc470e65" containerName="proxy-httpd" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.051310 4947 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c9825e55-7596-4c45-aa4c-0b74cc470e65" containerName="proxy-httpd" Nov 29 06:57:28 crc kubenswrapper[4947]: E1129 06:57:28.051357 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64edb581-33ba-46eb-a455-ce4f733f6944" containerName="barbican-api-log" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.051366 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="64edb581-33ba-46eb-a455-ce4f733f6944" containerName="barbican-api-log" Nov 29 06:57:28 crc kubenswrapper[4947]: E1129 06:57:28.051407 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9825e55-7596-4c45-aa4c-0b74cc470e65" containerName="sg-core" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.051416 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9825e55-7596-4c45-aa4c-0b74cc470e65" containerName="sg-core" Nov 29 06:57:28 crc kubenswrapper[4947]: E1129 06:57:28.051435 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64edb581-33ba-46eb-a455-ce4f733f6944" containerName="barbican-api" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.051442 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="64edb581-33ba-46eb-a455-ce4f733f6944" containerName="barbican-api" Nov 29 06:57:28 crc kubenswrapper[4947]: E1129 06:57:28.051457 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9825e55-7596-4c45-aa4c-0b74cc470e65" containerName="ceilometer-notification-agent" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.051464 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9825e55-7596-4c45-aa4c-0b74cc470e65" containerName="ceilometer-notification-agent" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.051889 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9825e55-7596-4c45-aa4c-0b74cc470e65" containerName="ceilometer-central-agent" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.051910 4947 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="64edb581-33ba-46eb-a455-ce4f733f6944" containerName="barbican-api-log" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.051940 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9825e55-7596-4c45-aa4c-0b74cc470e65" containerName="ceilometer-notification-agent" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.051955 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bfd51e3-c63d-4382-a864-cb0570e277d8" containerName="dnsmasq-dns" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.051972 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9825e55-7596-4c45-aa4c-0b74cc470e65" containerName="sg-core" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.051989 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="64edb581-33ba-46eb-a455-ce4f733f6944" containerName="barbican-api" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.052022 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9825e55-7596-4c45-aa4c-0b74cc470e65" containerName="proxy-httpd" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.055031 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-config-data" (OuterVolumeSpecName: "config-data") pod "c9825e55-7596-4c45-aa4c-0b74cc470e65" (UID: "c9825e55-7596-4c45-aa4c-0b74cc470e65"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.080757 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-d5xx7" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.100045 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9825e55-7596-4c45-aa4c-0b74cc470e65-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.112664 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-d5xx7"] Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.172569 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-lwblx"] Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.183170 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lwblx" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.203803 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhhvj\" (UniqueName: \"kubernetes.io/projected/69c8e8b7-9d6e-4918-acb3-77788534fba2-kube-api-access-rhhvj\") pod \"nova-cell0-db-create-lwblx\" (UID: \"69c8e8b7-9d6e-4918-acb3-77788534fba2\") " pod="openstack/nova-cell0-db-create-lwblx" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.203908 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c4d360-3806-49fc-85be-f8e1ea6d5975-operator-scripts\") pod \"nova-api-db-create-d5xx7\" (UID: \"92c4d360-3806-49fc-85be-f8e1ea6d5975\") " pod="openstack/nova-api-db-create-d5xx7" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.203981 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5pcl\" (UniqueName: \"kubernetes.io/projected/92c4d360-3806-49fc-85be-f8e1ea6d5975-kube-api-access-b5pcl\") pod \"nova-api-db-create-d5xx7\" (UID: 
\"92c4d360-3806-49fc-85be-f8e1ea6d5975\") " pod="openstack/nova-api-db-create-d5xx7" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.204083 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69c8e8b7-9d6e-4918-acb3-77788534fba2-operator-scripts\") pod \"nova-cell0-db-create-lwblx\" (UID: \"69c8e8b7-9d6e-4918-acb3-77788534fba2\") " pod="openstack/nova-cell0-db-create-lwblx" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.207793 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-lwblx"] Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.217677 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-e513-account-create-update-nsfwk"] Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.219046 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e513-account-create-update-nsfwk" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.221455 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.230121 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e513-account-create-update-nsfwk"] Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.305946 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-29srh"] Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.306013 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5pcl\" (UniqueName: \"kubernetes.io/projected/92c4d360-3806-49fc-85be-f8e1ea6d5975-kube-api-access-b5pcl\") pod \"nova-api-db-create-d5xx7\" (UID: \"92c4d360-3806-49fc-85be-f8e1ea6d5975\") " pod="openstack/nova-api-db-create-d5xx7" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.306106 4947 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7-operator-scripts\") pod \"nova-api-e513-account-create-update-nsfwk\" (UID: \"8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7\") " pod="openstack/nova-api-e513-account-create-update-nsfwk" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.306135 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69c8e8b7-9d6e-4918-acb3-77788534fba2-operator-scripts\") pod \"nova-cell0-db-create-lwblx\" (UID: \"69c8e8b7-9d6e-4918-acb3-77788534fba2\") " pod="openstack/nova-cell0-db-create-lwblx" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.306166 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hkqd\" (UniqueName: \"kubernetes.io/projected/8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7-kube-api-access-9hkqd\") pod \"nova-api-e513-account-create-update-nsfwk\" (UID: \"8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7\") " pod="openstack/nova-api-e513-account-create-update-nsfwk" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.306236 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhhvj\" (UniqueName: \"kubernetes.io/projected/69c8e8b7-9d6e-4918-acb3-77788534fba2-kube-api-access-rhhvj\") pod \"nova-cell0-db-create-lwblx\" (UID: \"69c8e8b7-9d6e-4918-acb3-77788534fba2\") " pod="openstack/nova-cell0-db-create-lwblx" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.306294 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c4d360-3806-49fc-85be-f8e1ea6d5975-operator-scripts\") pod \"nova-api-db-create-d5xx7\" (UID: \"92c4d360-3806-49fc-85be-f8e1ea6d5975\") " pod="openstack/nova-api-db-create-d5xx7" Nov 29 
06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.307194 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69c8e8b7-9d6e-4918-acb3-77788534fba2-operator-scripts\") pod \"nova-cell0-db-create-lwblx\" (UID: \"69c8e8b7-9d6e-4918-acb3-77788534fba2\") " pod="openstack/nova-cell0-db-create-lwblx" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.307553 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c4d360-3806-49fc-85be-f8e1ea6d5975-operator-scripts\") pod \"nova-api-db-create-d5xx7\" (UID: \"92c4d360-3806-49fc-85be-f8e1ea6d5975\") " pod="openstack/nova-api-db-create-d5xx7" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.310738 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-29srh" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.326112 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-29srh"] Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.328701 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhhvj\" (UniqueName: \"kubernetes.io/projected/69c8e8b7-9d6e-4918-acb3-77788534fba2-kube-api-access-rhhvj\") pod \"nova-cell0-db-create-lwblx\" (UID: \"69c8e8b7-9d6e-4918-acb3-77788534fba2\") " pod="openstack/nova-cell0-db-create-lwblx" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.335942 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5pcl\" (UniqueName: \"kubernetes.io/projected/92c4d360-3806-49fc-85be-f8e1ea6d5975-kube-api-access-b5pcl\") pod \"nova-api-db-create-d5xx7\" (UID: \"92c4d360-3806-49fc-85be-f8e1ea6d5975\") " pod="openstack/nova-api-db-create-d5xx7" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.339659 4947 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-ecc6-account-create-update-299dm"] Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.341735 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ecc6-account-create-update-299dm" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.351812 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ecc6-account-create-update-299dm"] Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.355555 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.407890 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7-operator-scripts\") pod \"nova-api-e513-account-create-update-nsfwk\" (UID: \"8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7\") " pod="openstack/nova-api-e513-account-create-update-nsfwk" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.408344 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hkqd\" (UniqueName: \"kubernetes.io/projected/8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7-kube-api-access-9hkqd\") pod \"nova-api-e513-account-create-update-nsfwk\" (UID: \"8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7\") " pod="openstack/nova-api-e513-account-create-update-nsfwk" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.408397 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-222v4\" (UniqueName: \"kubernetes.io/projected/5b100e9b-0224-436a-a3a5-73587eda6743-kube-api-access-222v4\") pod \"nova-cell0-ecc6-account-create-update-299dm\" (UID: \"5b100e9b-0224-436a-a3a5-73587eda6743\") " pod="openstack/nova-cell0-ecc6-account-create-update-299dm" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.408467 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mplkt\" (UniqueName: \"kubernetes.io/projected/aadd9963-9c1e-4c5e-b03e-6577b3f1f139-kube-api-access-mplkt\") pod \"nova-cell1-db-create-29srh\" (UID: \"aadd9963-9c1e-4c5e-b03e-6577b3f1f139\") " pod="openstack/nova-cell1-db-create-29srh" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.408497 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aadd9963-9c1e-4c5e-b03e-6577b3f1f139-operator-scripts\") pod \"nova-cell1-db-create-29srh\" (UID: \"aadd9963-9c1e-4c5e-b03e-6577b3f1f139\") " pod="openstack/nova-cell1-db-create-29srh" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.408587 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b100e9b-0224-436a-a3a5-73587eda6743-operator-scripts\") pod \"nova-cell0-ecc6-account-create-update-299dm\" (UID: \"5b100e9b-0224-436a-a3a5-73587eda6743\") " pod="openstack/nova-cell0-ecc6-account-create-update-299dm" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.408773 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7-operator-scripts\") pod \"nova-api-e513-account-create-update-nsfwk\" (UID: \"8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7\") " pod="openstack/nova-api-e513-account-create-update-nsfwk" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.434691 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-d5xx7" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.441651 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hkqd\" (UniqueName: \"kubernetes.io/projected/8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7-kube-api-access-9hkqd\") pod \"nova-api-e513-account-create-update-nsfwk\" (UID: \"8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7\") " pod="openstack/nova-api-e513-account-create-update-nsfwk" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.510952 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mplkt\" (UniqueName: \"kubernetes.io/projected/aadd9963-9c1e-4c5e-b03e-6577b3f1f139-kube-api-access-mplkt\") pod \"nova-cell1-db-create-29srh\" (UID: \"aadd9963-9c1e-4c5e-b03e-6577b3f1f139\") " pod="openstack/nova-cell1-db-create-29srh" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.511011 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aadd9963-9c1e-4c5e-b03e-6577b3f1f139-operator-scripts\") pod \"nova-cell1-db-create-29srh\" (UID: \"aadd9963-9c1e-4c5e-b03e-6577b3f1f139\") " pod="openstack/nova-cell1-db-create-29srh" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.511065 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b100e9b-0224-436a-a3a5-73587eda6743-operator-scripts\") pod \"nova-cell0-ecc6-account-create-update-299dm\" (UID: \"5b100e9b-0224-436a-a3a5-73587eda6743\") " pod="openstack/nova-cell0-ecc6-account-create-update-299dm" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.511160 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-222v4\" (UniqueName: \"kubernetes.io/projected/5b100e9b-0224-436a-a3a5-73587eda6743-kube-api-access-222v4\") pod 
\"nova-cell0-ecc6-account-create-update-299dm\" (UID: \"5b100e9b-0224-436a-a3a5-73587eda6743\") " pod="openstack/nova-cell0-ecc6-account-create-update-299dm" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.512309 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aadd9963-9c1e-4c5e-b03e-6577b3f1f139-operator-scripts\") pod \"nova-cell1-db-create-29srh\" (UID: \"aadd9963-9c1e-4c5e-b03e-6577b3f1f139\") " pod="openstack/nova-cell1-db-create-29srh" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.512404 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b100e9b-0224-436a-a3a5-73587eda6743-operator-scripts\") pod \"nova-cell0-ecc6-account-create-update-299dm\" (UID: \"5b100e9b-0224-436a-a3a5-73587eda6743\") " pod="openstack/nova-cell0-ecc6-account-create-update-299dm" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.512514 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lwblx" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.520373 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.521483 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9825e55-7596-4c45-aa4c-0b74cc470e65","Type":"ContainerDied","Data":"7546638c37d626da1f4f4b7c74ce66a7c1fc52c5acd9d42cd360cb93676dbf8a"} Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.521578 4947 scope.go:117] "RemoveContainer" containerID="ac0b51b01da8f62c1b943c2fdd2504630ba0c4e130fbd7fe7750c0e0dae57019" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.533594 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-d40e-account-create-update-rx5gn"] Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.534996 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d40e-account-create-update-rx5gn" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.535893 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e513-account-create-update-nsfwk" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.538127 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.539345 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-222v4\" (UniqueName: \"kubernetes.io/projected/5b100e9b-0224-436a-a3a5-73587eda6743-kube-api-access-222v4\") pod \"nova-cell0-ecc6-account-create-update-299dm\" (UID: \"5b100e9b-0224-436a-a3a5-73587eda6743\") " pod="openstack/nova-cell0-ecc6-account-create-update-299dm" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.551859 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d40e-account-create-update-rx5gn"] Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.559266 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mplkt\" (UniqueName: \"kubernetes.io/projected/aadd9963-9c1e-4c5e-b03e-6577b3f1f139-kube-api-access-mplkt\") pod \"nova-cell1-db-create-29srh\" (UID: \"aadd9963-9c1e-4c5e-b03e-6577b3f1f139\") " pod="openstack/nova-cell1-db-create-29srh" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.609069 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.614783 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d5a6e2f-204f-4356-b140-e1a58c242965-operator-scripts\") pod \"nova-cell1-d40e-account-create-update-rx5gn\" (UID: \"5d5a6e2f-204f-4356-b140-e1a58c242965\") " pod="openstack/nova-cell1-d40e-account-create-update-rx5gn" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.614868 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9z8rc\" (UniqueName: \"kubernetes.io/projected/5d5a6e2f-204f-4356-b140-e1a58c242965-kube-api-access-9z8rc\") pod \"nova-cell1-d40e-account-create-update-rx5gn\" (UID: \"5d5a6e2f-204f-4356-b140-e1a58c242965\") " pod="openstack/nova-cell1-d40e-account-create-update-rx5gn" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.629514 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-29srh" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.632172 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.664002 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ecc6-account-create-update-299dm" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.668422 4947 scope.go:117] "RemoveContainer" containerID="30ce8e2ee9e46d4f382b608f6eb99b4727165939538e98a61788a1256b061db2" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.678030 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.680783 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.688262 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.689285 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.709522 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.720295 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d5a6e2f-204f-4356-b140-e1a58c242965-operator-scripts\") pod \"nova-cell1-d40e-account-create-update-rx5gn\" (UID: \"5d5a6e2f-204f-4356-b140-e1a58c242965\") " pod="openstack/nova-cell1-d40e-account-create-update-rx5gn" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.720355 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f46ebbae-61e6-4e58-9f82-f47aca4269f5-run-httpd\") pod \"ceilometer-0\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.720402 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z8rc\" (UniqueName: \"kubernetes.io/projected/5d5a6e2f-204f-4356-b140-e1a58c242965-kube-api-access-9z8rc\") pod \"nova-cell1-d40e-account-create-update-rx5gn\" (UID: \"5d5a6e2f-204f-4356-b140-e1a58c242965\") " pod="openstack/nova-cell1-d40e-account-create-update-rx5gn" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.720490 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.720535 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.720561 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f46ebbae-61e6-4e58-9f82-f47aca4269f5-log-httpd\") pod \"ceilometer-0\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.720609 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-scripts\") pod \"ceilometer-0\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.720629 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-config-data\") pod \"ceilometer-0\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.720654 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qqc8\" (UniqueName: \"kubernetes.io/projected/f46ebbae-61e6-4e58-9f82-f47aca4269f5-kube-api-access-9qqc8\") pod \"ceilometer-0\" (UID: 
\"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.721681 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d5a6e2f-204f-4356-b140-e1a58c242965-operator-scripts\") pod \"nova-cell1-d40e-account-create-update-rx5gn\" (UID: \"5d5a6e2f-204f-4356-b140-e1a58c242965\") " pod="openstack/nova-cell1-d40e-account-create-update-rx5gn" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.759392 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z8rc\" (UniqueName: \"kubernetes.io/projected/5d5a6e2f-204f-4356-b140-e1a58c242965-kube-api-access-9z8rc\") pod \"nova-cell1-d40e-account-create-update-rx5gn\" (UID: \"5d5a6e2f-204f-4356-b140-e1a58c242965\") " pod="openstack/nova-cell1-d40e-account-create-update-rx5gn" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.808440 4947 scope.go:117] "RemoveContainer" containerID="81435585e674f07540a92eb4a4645d7cc44ffa1e357d5aa68425c08abbe3c3fc" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.822955 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.823065 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.823100 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f46ebbae-61e6-4e58-9f82-f47aca4269f5-log-httpd\") pod \"ceilometer-0\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.823148 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-scripts\") pod \"ceilometer-0\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.823212 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-config-data\") pod \"ceilometer-0\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.823263 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qqc8\" (UniqueName: \"kubernetes.io/projected/f46ebbae-61e6-4e58-9f82-f47aca4269f5-kube-api-access-9qqc8\") pod \"ceilometer-0\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.823354 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f46ebbae-61e6-4e58-9f82-f47aca4269f5-run-httpd\") pod \"ceilometer-0\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.823987 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f46ebbae-61e6-4e58-9f82-f47aca4269f5-run-httpd\") pod \"ceilometer-0\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.828660 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f46ebbae-61e6-4e58-9f82-f47aca4269f5-log-httpd\") pod \"ceilometer-0\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.830046 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.832941 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.835002 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-scripts\") pod \"ceilometer-0\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.836166 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-config-data\") pod \"ceilometer-0\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.865271 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qqc8\" (UniqueName: \"kubernetes.io/projected/f46ebbae-61e6-4e58-9f82-f47aca4269f5-kube-api-access-9qqc8\") pod \"ceilometer-0\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " 
pod="openstack/ceilometer-0" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.892426 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d40e-account-create-update-rx5gn" Nov 29 06:57:28 crc kubenswrapper[4947]: I1129 06:57:28.993649 4947 scope.go:117] "RemoveContainer" containerID="c2398ce9e4ac38cb6fbea2be89e944f75376f74a1462601f3734203cb62d622a" Nov 29 06:57:29 crc kubenswrapper[4947]: I1129 06:57:29.008695 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:57:29 crc kubenswrapper[4947]: I1129 06:57:29.113969 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-d5xx7"] Nov 29 06:57:29 crc kubenswrapper[4947]: I1129 06:57:29.249903 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64edb581-33ba-46eb-a455-ce4f733f6944" path="/var/lib/kubelet/pods/64edb581-33ba-46eb-a455-ce4f733f6944/volumes" Nov 29 06:57:29 crc kubenswrapper[4947]: I1129 06:57:29.251074 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9825e55-7596-4c45-aa4c-0b74cc470e65" path="/var/lib/kubelet/pods/c9825e55-7596-4c45-aa4c-0b74cc470e65/volumes" Nov 29 06:57:29 crc kubenswrapper[4947]: I1129 06:57:29.296575 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-lwblx"] Nov 29 06:57:29 crc kubenswrapper[4947]: W1129 06:57:29.307326 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69c8e8b7_9d6e_4918_acb3_77788534fba2.slice/crio-370b858bdc512212715bfa451067c31450bfe56569cdaaf64d809882c4dc91bc WatchSource:0}: Error finding container 370b858bdc512212715bfa451067c31450bfe56569cdaaf64d809882c4dc91bc: Status 404 returned error can't find the container with id 370b858bdc512212715bfa451067c31450bfe56569cdaaf64d809882c4dc91bc Nov 29 06:57:29 crc kubenswrapper[4947]: I1129 
06:57:29.444707 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e513-account-create-update-nsfwk"] Nov 29 06:57:29 crc kubenswrapper[4947]: I1129 06:57:29.536577 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lwblx" event={"ID":"69c8e8b7-9d6e-4918-acb3-77788534fba2","Type":"ContainerStarted","Data":"370b858bdc512212715bfa451067c31450bfe56569cdaaf64d809882c4dc91bc"} Nov 29 06:57:29 crc kubenswrapper[4947]: I1129 06:57:29.544311 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e513-account-create-update-nsfwk" event={"ID":"8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7","Type":"ContainerStarted","Data":"4ae87e2438b965d7d4291184fac2d1af47d0de1ec5f4f1d9fdfda389569302ed"} Nov 29 06:57:29 crc kubenswrapper[4947]: I1129 06:57:29.546379 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-d5xx7" event={"ID":"92c4d360-3806-49fc-85be-f8e1ea6d5975","Type":"ContainerStarted","Data":"ce8f8cb9067aa12271fb0cb6afd6975372be8f57fc10889e30cd152f2cbc058d"} Nov 29 06:57:29 crc kubenswrapper[4947]: I1129 06:57:29.546531 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-d5xx7" event={"ID":"92c4d360-3806-49fc-85be-f8e1ea6d5975","Type":"ContainerStarted","Data":"ad319412f4c9ff70ef91e4f8a7d0f0b87c02272e232c4d5b1b69f6ef0df6fe3d"} Nov 29 06:57:29 crc kubenswrapper[4947]: I1129 06:57:29.557686 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-lwblx" podStartSLOduration=1.5576595439999998 podStartE2EDuration="1.557659544s" podCreationTimestamp="2025-11-29 06:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:57:29.554597537 +0000 UTC m=+1400.598979618" watchObservedRunningTime="2025-11-29 06:57:29.557659544 +0000 UTC m=+1400.602041625" Nov 29 06:57:29 crc 
kubenswrapper[4947]: I1129 06:57:29.598276 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-d5xx7" podStartSLOduration=2.598245814 podStartE2EDuration="2.598245814s" podCreationTimestamp="2025-11-29 06:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:57:29.581879353 +0000 UTC m=+1400.626261444" watchObservedRunningTime="2025-11-29 06:57:29.598245814 +0000 UTC m=+1400.642627895" Nov 29 06:57:29 crc kubenswrapper[4947]: I1129 06:57:29.675884 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d40e-account-create-update-rx5gn"] Nov 29 06:57:29 crc kubenswrapper[4947]: I1129 06:57:29.685708 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-29srh"] Nov 29 06:57:29 crc kubenswrapper[4947]: I1129 06:57:29.698983 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ecc6-account-create-update-299dm"] Nov 29 06:57:29 crc kubenswrapper[4947]: I1129 06:57:29.917179 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:29 crc kubenswrapper[4947]: W1129 06:57:29.943055 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf46ebbae_61e6_4e58_9f82_f47aca4269f5.slice/crio-b37dabef3d365754f59ba99dc06e8af203f3b639ad70be1eb3d27b2fec136088 WatchSource:0}: Error finding container b37dabef3d365754f59ba99dc06e8af203f3b639ad70be1eb3d27b2fec136088: Status 404 returned error can't find the container with id b37dabef3d365754f59ba99dc06e8af203f3b639ad70be1eb3d27b2fec136088 Nov 29 06:57:30 crc kubenswrapper[4947]: I1129 06:57:30.558901 4947 generic.go:334] "Generic (PLEG): container finished" podID="8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7" containerID="8f40eab40b5eb7d2e182604267d1bd92ea1eb89641fe6d90f1afa133e486177c" 
exitCode=0 Nov 29 06:57:30 crc kubenswrapper[4947]: I1129 06:57:30.559002 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e513-account-create-update-nsfwk" event={"ID":"8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7","Type":"ContainerDied","Data":"8f40eab40b5eb7d2e182604267d1bd92ea1eb89641fe6d90f1afa133e486177c"} Nov 29 06:57:30 crc kubenswrapper[4947]: I1129 06:57:30.562797 4947 generic.go:334] "Generic (PLEG): container finished" podID="5d5a6e2f-204f-4356-b140-e1a58c242965" containerID="61b442d67041139b67a4033884792482a78ae8f47cfcce09d6795a7f7837fd8f" exitCode=0 Nov 29 06:57:30 crc kubenswrapper[4947]: I1129 06:57:30.562892 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d40e-account-create-update-rx5gn" event={"ID":"5d5a6e2f-204f-4356-b140-e1a58c242965","Type":"ContainerDied","Data":"61b442d67041139b67a4033884792482a78ae8f47cfcce09d6795a7f7837fd8f"} Nov 29 06:57:30 crc kubenswrapper[4947]: I1129 06:57:30.562933 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d40e-account-create-update-rx5gn" event={"ID":"5d5a6e2f-204f-4356-b140-e1a58c242965","Type":"ContainerStarted","Data":"ea2411a3a94ec7764e3be7ef8ad5367786005ec7212315002c67d88576821f60"} Nov 29 06:57:30 crc kubenswrapper[4947]: I1129 06:57:30.566085 4947 generic.go:334] "Generic (PLEG): container finished" podID="5b100e9b-0224-436a-a3a5-73587eda6743" containerID="e6bb25a7208b2914ae8e17881601d353d2e2fb5cd5549fd394606776c2443dc9" exitCode=0 Nov 29 06:57:30 crc kubenswrapper[4947]: I1129 06:57:30.566145 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ecc6-account-create-update-299dm" event={"ID":"5b100e9b-0224-436a-a3a5-73587eda6743","Type":"ContainerDied","Data":"e6bb25a7208b2914ae8e17881601d353d2e2fb5cd5549fd394606776c2443dc9"} Nov 29 06:57:30 crc kubenswrapper[4947]: I1129 06:57:30.566165 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-ecc6-account-create-update-299dm" event={"ID":"5b100e9b-0224-436a-a3a5-73587eda6743","Type":"ContainerStarted","Data":"04fb9b2771e826403f457a2eb0f0120d1e52455bc1ee68b1291fd9edb0b1dec3"} Nov 29 06:57:30 crc kubenswrapper[4947]: I1129 06:57:30.568811 4947 generic.go:334] "Generic (PLEG): container finished" podID="aadd9963-9c1e-4c5e-b03e-6577b3f1f139" containerID="9da98ec2a732bde0b44163e194a27d5414f8a25f43c72dab0f8f03eeea461c10" exitCode=0 Nov 29 06:57:30 crc kubenswrapper[4947]: I1129 06:57:30.568862 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-29srh" event={"ID":"aadd9963-9c1e-4c5e-b03e-6577b3f1f139","Type":"ContainerDied","Data":"9da98ec2a732bde0b44163e194a27d5414f8a25f43c72dab0f8f03eeea461c10"} Nov 29 06:57:30 crc kubenswrapper[4947]: I1129 06:57:30.568881 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-29srh" event={"ID":"aadd9963-9c1e-4c5e-b03e-6577b3f1f139","Type":"ContainerStarted","Data":"61ad46fd9902a4807b4ae62cca7cfd92c6e47a9c395083cc97fd41e316fe10e8"} Nov 29 06:57:30 crc kubenswrapper[4947]: I1129 06:57:30.571134 4947 generic.go:334] "Generic (PLEG): container finished" podID="92c4d360-3806-49fc-85be-f8e1ea6d5975" containerID="ce8f8cb9067aa12271fb0cb6afd6975372be8f57fc10889e30cd152f2cbc058d" exitCode=0 Nov 29 06:57:30 crc kubenswrapper[4947]: I1129 06:57:30.571269 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-d5xx7" event={"ID":"92c4d360-3806-49fc-85be-f8e1ea6d5975","Type":"ContainerDied","Data":"ce8f8cb9067aa12271fb0cb6afd6975372be8f57fc10889e30cd152f2cbc058d"} Nov 29 06:57:30 crc kubenswrapper[4947]: I1129 06:57:30.574625 4947 generic.go:334] "Generic (PLEG): container finished" podID="69c8e8b7-9d6e-4918-acb3-77788534fba2" containerID="51f1569b74daff9ffa35e5d003b27155e38ea55be2d197756e6ddf3a87ae8a41" exitCode=0 Nov 29 06:57:30 crc kubenswrapper[4947]: I1129 06:57:30.574791 4947 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lwblx" event={"ID":"69c8e8b7-9d6e-4918-acb3-77788534fba2","Type":"ContainerDied","Data":"51f1569b74daff9ffa35e5d003b27155e38ea55be2d197756e6ddf3a87ae8a41"} Nov 29 06:57:30 crc kubenswrapper[4947]: I1129 06:57:30.576669 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f46ebbae-61e6-4e58-9f82-f47aca4269f5","Type":"ContainerStarted","Data":"b37dabef3d365754f59ba99dc06e8af203f3b639ad70be1eb3d27b2fec136088"} Nov 29 06:57:31 crc kubenswrapper[4947]: I1129 06:57:31.588679 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f46ebbae-61e6-4e58-9f82-f47aca4269f5","Type":"ContainerStarted","Data":"e927edc3487ec2833d7b1aca11bcba15dcad0c3fa70c97a5e2354370b50ff4f6"} Nov 29 06:57:31 crc kubenswrapper[4947]: I1129 06:57:31.892598 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.113559 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-lwblx" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.133949 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhhvj\" (UniqueName: \"kubernetes.io/projected/69c8e8b7-9d6e-4918-acb3-77788534fba2-kube-api-access-rhhvj\") pod \"69c8e8b7-9d6e-4918-acb3-77788534fba2\" (UID: \"69c8e8b7-9d6e-4918-acb3-77788534fba2\") " Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.134253 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69c8e8b7-9d6e-4918-acb3-77788534fba2-operator-scripts\") pod \"69c8e8b7-9d6e-4918-acb3-77788534fba2\" (UID: \"69c8e8b7-9d6e-4918-acb3-77788534fba2\") " Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.134992 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69c8e8b7-9d6e-4918-acb3-77788534fba2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "69c8e8b7-9d6e-4918-acb3-77788534fba2" (UID: "69c8e8b7-9d6e-4918-acb3-77788534fba2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.143459 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69c8e8b7-9d6e-4918-acb3-77788534fba2-kube-api-access-rhhvj" (OuterVolumeSpecName: "kube-api-access-rhhvj") pod "69c8e8b7-9d6e-4918-acb3-77788534fba2" (UID: "69c8e8b7-9d6e-4918-acb3-77788534fba2"). InnerVolumeSpecName "kube-api-access-rhhvj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.236676 4947 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69c8e8b7-9d6e-4918-acb3-77788534fba2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.236725 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhhvj\" (UniqueName: \"kubernetes.io/projected/69c8e8b7-9d6e-4918-acb3-77788534fba2-kube-api-access-rhhvj\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.515096 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d40e-account-create-update-rx5gn" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.521079 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-d5xx7" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.534770 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e513-account-create-update-nsfwk" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.559951 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c4d360-3806-49fc-85be-f8e1ea6d5975-operator-scripts\") pod \"92c4d360-3806-49fc-85be-f8e1ea6d5975\" (UID: \"92c4d360-3806-49fc-85be-f8e1ea6d5975\") " Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.560510 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5pcl\" (UniqueName: \"kubernetes.io/projected/92c4d360-3806-49fc-85be-f8e1ea6d5975-kube-api-access-b5pcl\") pod \"92c4d360-3806-49fc-85be-f8e1ea6d5975\" (UID: \"92c4d360-3806-49fc-85be-f8e1ea6d5975\") " Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.560680 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d5a6e2f-204f-4356-b140-e1a58c242965-operator-scripts\") pod \"5d5a6e2f-204f-4356-b140-e1a58c242965\" (UID: \"5d5a6e2f-204f-4356-b140-e1a58c242965\") " Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.560758 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z8rc\" (UniqueName: \"kubernetes.io/projected/5d5a6e2f-204f-4356-b140-e1a58c242965-kube-api-access-9z8rc\") pod \"5d5a6e2f-204f-4356-b140-e1a58c242965\" (UID: \"5d5a6e2f-204f-4356-b140-e1a58c242965\") " Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.564074 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92c4d360-3806-49fc-85be-f8e1ea6d5975-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92c4d360-3806-49fc-85be-f8e1ea6d5975" (UID: "92c4d360-3806-49fc-85be-f8e1ea6d5975"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.564159 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d5a6e2f-204f-4356-b140-e1a58c242965-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d5a6e2f-204f-4356-b140-e1a58c242965" (UID: "5d5a6e2f-204f-4356-b140-e1a58c242965"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.569410 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d5a6e2f-204f-4356-b140-e1a58c242965-kube-api-access-9z8rc" (OuterVolumeSpecName: "kube-api-access-9z8rc") pod "5d5a6e2f-204f-4356-b140-e1a58c242965" (UID: "5d5a6e2f-204f-4356-b140-e1a58c242965"). InnerVolumeSpecName "kube-api-access-9z8rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.569521 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-29srh" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.579153 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ecc6-account-create-update-299dm" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.580914 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92c4d360-3806-49fc-85be-f8e1ea6d5975-kube-api-access-b5pcl" (OuterVolumeSpecName: "kube-api-access-b5pcl") pod "92c4d360-3806-49fc-85be-f8e1ea6d5975" (UID: "92c4d360-3806-49fc-85be-f8e1ea6d5975"). InnerVolumeSpecName "kube-api-access-b5pcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.618140 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-29srh" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.618136 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-29srh" event={"ID":"aadd9963-9c1e-4c5e-b03e-6577b3f1f139","Type":"ContainerDied","Data":"61ad46fd9902a4807b4ae62cca7cfd92c6e47a9c395083cc97fd41e316fe10e8"} Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.618393 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61ad46fd9902a4807b4ae62cca7cfd92c6e47a9c395083cc97fd41e316fe10e8" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.624822 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-d5xx7" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.626295 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-d5xx7" event={"ID":"92c4d360-3806-49fc-85be-f8e1ea6d5975","Type":"ContainerDied","Data":"ad319412f4c9ff70ef91e4f8a7d0f0b87c02272e232c4d5b1b69f6ef0df6fe3d"} Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.626345 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad319412f4c9ff70ef91e4f8a7d0f0b87c02272e232c4d5b1b69f6ef0df6fe3d" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.632751 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lwblx" event={"ID":"69c8e8b7-9d6e-4918-acb3-77788534fba2","Type":"ContainerDied","Data":"370b858bdc512212715bfa451067c31450bfe56569cdaaf64d809882c4dc91bc"} Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.632805 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="370b858bdc512212715bfa451067c31450bfe56569cdaaf64d809882c4dc91bc" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.632865 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-lwblx" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.638393 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f46ebbae-61e6-4e58-9f82-f47aca4269f5","Type":"ContainerStarted","Data":"36286bf1fe0fa21e11986f770e2cc76fbef333fe6e42f4278750c784b5daaca4"} Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.640734 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e513-account-create-update-nsfwk" event={"ID":"8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7","Type":"ContainerDied","Data":"4ae87e2438b965d7d4291184fac2d1af47d0de1ec5f4f1d9fdfda389569302ed"} Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.641929 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ae87e2438b965d7d4291184fac2d1af47d0de1ec5f4f1d9fdfda389569302ed" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.641627 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e513-account-create-update-nsfwk" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.643989 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d40e-account-create-update-rx5gn" event={"ID":"5d5a6e2f-204f-4356-b140-e1a58c242965","Type":"ContainerDied","Data":"ea2411a3a94ec7764e3be7ef8ad5367786005ec7212315002c67d88576821f60"} Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.644049 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea2411a3a94ec7764e3be7ef8ad5367786005ec7212315002c67d88576821f60" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.645428 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-d40e-account-create-update-rx5gn" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.648350 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ecc6-account-create-update-299dm" event={"ID":"5b100e9b-0224-436a-a3a5-73587eda6743","Type":"ContainerDied","Data":"04fb9b2771e826403f457a2eb0f0120d1e52455bc1ee68b1291fd9edb0b1dec3"} Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.648386 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04fb9b2771e826403f457a2eb0f0120d1e52455bc1ee68b1291fd9edb0b1dec3" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.648438 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ecc6-account-create-update-299dm" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.663129 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b100e9b-0224-436a-a3a5-73587eda6743-operator-scripts\") pod \"5b100e9b-0224-436a-a3a5-73587eda6743\" (UID: \"5b100e9b-0224-436a-a3a5-73587eda6743\") " Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.663191 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7-operator-scripts\") pod \"8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7\" (UID: \"8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7\") " Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.663263 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-222v4\" (UniqueName: \"kubernetes.io/projected/5b100e9b-0224-436a-a3a5-73587eda6743-kube-api-access-222v4\") pod \"5b100e9b-0224-436a-a3a5-73587eda6743\" (UID: \"5b100e9b-0224-436a-a3a5-73587eda6743\") " Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.663311 4947 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aadd9963-9c1e-4c5e-b03e-6577b3f1f139-operator-scripts\") pod \"aadd9963-9c1e-4c5e-b03e-6577b3f1f139\" (UID: \"aadd9963-9c1e-4c5e-b03e-6577b3f1f139\") " Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.663362 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hkqd\" (UniqueName: \"kubernetes.io/projected/8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7-kube-api-access-9hkqd\") pod \"8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7\" (UID: \"8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7\") " Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.663457 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mplkt\" (UniqueName: \"kubernetes.io/projected/aadd9963-9c1e-4c5e-b03e-6577b3f1f139-kube-api-access-mplkt\") pod \"aadd9963-9c1e-4c5e-b03e-6577b3f1f139\" (UID: \"aadd9963-9c1e-4c5e-b03e-6577b3f1f139\") " Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.664438 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b100e9b-0224-436a-a3a5-73587eda6743-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b100e9b-0224-436a-a3a5-73587eda6743" (UID: "5b100e9b-0224-436a-a3a5-73587eda6743"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.664546 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aadd9963-9c1e-4c5e-b03e-6577b3f1f139-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aadd9963-9c1e-4c5e-b03e-6577b3f1f139" (UID: "aadd9963-9c1e-4c5e-b03e-6577b3f1f139"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.665147 4947 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c4d360-3806-49fc-85be-f8e1ea6d5975-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.665182 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5pcl\" (UniqueName: \"kubernetes.io/projected/92c4d360-3806-49fc-85be-f8e1ea6d5975-kube-api-access-b5pcl\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.665198 4947 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b100e9b-0224-436a-a3a5-73587eda6743-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.665210 4947 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aadd9963-9c1e-4c5e-b03e-6577b3f1f139-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.665287 4947 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d5a6e2f-204f-4356-b140-e1a58c242965-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.665303 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z8rc\" (UniqueName: \"kubernetes.io/projected/5d5a6e2f-204f-4356-b140-e1a58c242965-kube-api-access-9z8rc\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.665727 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7" (UID: "8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.672262 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b100e9b-0224-436a-a3a5-73587eda6743-kube-api-access-222v4" (OuterVolumeSpecName: "kube-api-access-222v4") pod "5b100e9b-0224-436a-a3a5-73587eda6743" (UID: "5b100e9b-0224-436a-a3a5-73587eda6743"). InnerVolumeSpecName "kube-api-access-222v4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.676917 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7-kube-api-access-9hkqd" (OuterVolumeSpecName: "kube-api-access-9hkqd") pod "8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7" (UID: "8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7"). InnerVolumeSpecName "kube-api-access-9hkqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.677002 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aadd9963-9c1e-4c5e-b03e-6577b3f1f139-kube-api-access-mplkt" (OuterVolumeSpecName: "kube-api-access-mplkt") pod "aadd9963-9c1e-4c5e-b03e-6577b3f1f139" (UID: "aadd9963-9c1e-4c5e-b03e-6577b3f1f139"). InnerVolumeSpecName "kube-api-access-mplkt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.767468 4947 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.767712 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-222v4\" (UniqueName: \"kubernetes.io/projected/5b100e9b-0224-436a-a3a5-73587eda6743-kube-api-access-222v4\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.767816 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hkqd\" (UniqueName: \"kubernetes.io/projected/8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7-kube-api-access-9hkqd\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:32 crc kubenswrapper[4947]: I1129 06:57:32.767892 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mplkt\" (UniqueName: \"kubernetes.io/projected/aadd9963-9c1e-4c5e-b03e-6577b3f1f139-kube-api-access-mplkt\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:33 crc kubenswrapper[4947]: I1129 06:57:33.678314 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f46ebbae-61e6-4e58-9f82-f47aca4269f5","Type":"ContainerStarted","Data":"d2d492d5d1459ed41d5fb2244a536d64169c93d969f19b60b580bf156b2e2fb6"} Nov 29 06:57:34 crc kubenswrapper[4947]: I1129 06:57:34.560989 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7d7467cbc8-jttcd" Nov 29 06:57:34 crc kubenswrapper[4947]: I1129 06:57:34.691695 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f46ebbae-61e6-4e58-9f82-f47aca4269f5","Type":"ContainerStarted","Data":"ac6a9f067d0041e19e6c2cf915fb4679bef995b45a89f1f5fef88f133b721b25"} Nov 29 06:57:34 crc kubenswrapper[4947]: I1129 
06:57:34.691951 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f46ebbae-61e6-4e58-9f82-f47aca4269f5" containerName="ceilometer-central-agent" containerID="cri-o://e927edc3487ec2833d7b1aca11bcba15dcad0c3fa70c97a5e2354370b50ff4f6" gracePeriod=30 Nov 29 06:57:34 crc kubenswrapper[4947]: I1129 06:57:34.692501 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 06:57:34 crc kubenswrapper[4947]: I1129 06:57:34.692762 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f46ebbae-61e6-4e58-9f82-f47aca4269f5" containerName="proxy-httpd" containerID="cri-o://ac6a9f067d0041e19e6c2cf915fb4679bef995b45a89f1f5fef88f133b721b25" gracePeriod=30 Nov 29 06:57:34 crc kubenswrapper[4947]: I1129 06:57:34.692811 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f46ebbae-61e6-4e58-9f82-f47aca4269f5" containerName="sg-core" containerID="cri-o://d2d492d5d1459ed41d5fb2244a536d64169c93d969f19b60b580bf156b2e2fb6" gracePeriod=30 Nov 29 06:57:34 crc kubenswrapper[4947]: I1129 06:57:34.692826 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f46ebbae-61e6-4e58-9f82-f47aca4269f5" containerName="ceilometer-notification-agent" containerID="cri-o://36286bf1fe0fa21e11986f770e2cc76fbef333fe6e42f4278750c784b5daaca4" gracePeriod=30 Nov 29 06:57:34 crc kubenswrapper[4947]: I1129 06:57:34.725754 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.849688245 podStartE2EDuration="6.725721658s" podCreationTimestamp="2025-11-29 06:57:28 +0000 UTC" firstStartedPulling="2025-11-29 06:57:29.947451093 +0000 UTC m=+1400.991833174" lastFinishedPulling="2025-11-29 06:57:33.823484506 +0000 UTC m=+1404.867866587" observedRunningTime="2025-11-29 
06:57:34.724812265 +0000 UTC m=+1405.769194356" watchObservedRunningTime="2025-11-29 06:57:34.725721658 +0000 UTC m=+1405.770103739" Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.720803 4947 generic.go:334] "Generic (PLEG): container finished" podID="f46ebbae-61e6-4e58-9f82-f47aca4269f5" containerID="ac6a9f067d0041e19e6c2cf915fb4679bef995b45a89f1f5fef88f133b721b25" exitCode=0 Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.721497 4947 generic.go:334] "Generic (PLEG): container finished" podID="f46ebbae-61e6-4e58-9f82-f47aca4269f5" containerID="d2d492d5d1459ed41d5fb2244a536d64169c93d969f19b60b580bf156b2e2fb6" exitCode=2 Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.721514 4947 generic.go:334] "Generic (PLEG): container finished" podID="f46ebbae-61e6-4e58-9f82-f47aca4269f5" containerID="36286bf1fe0fa21e11986f770e2cc76fbef333fe6e42f4278750c784b5daaca4" exitCode=0 Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.721524 4947 generic.go:334] "Generic (PLEG): container finished" podID="f46ebbae-61e6-4e58-9f82-f47aca4269f5" containerID="e927edc3487ec2833d7b1aca11bcba15dcad0c3fa70c97a5e2354370b50ff4f6" exitCode=0 Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.720954 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f46ebbae-61e6-4e58-9f82-f47aca4269f5","Type":"ContainerDied","Data":"ac6a9f067d0041e19e6c2cf915fb4679bef995b45a89f1f5fef88f133b721b25"} Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.721583 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f46ebbae-61e6-4e58-9f82-f47aca4269f5","Type":"ContainerDied","Data":"d2d492d5d1459ed41d5fb2244a536d64169c93d969f19b60b580bf156b2e2fb6"} Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.721606 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f46ebbae-61e6-4e58-9f82-f47aca4269f5","Type":"ContainerDied","Data":"36286bf1fe0fa21e11986f770e2cc76fbef333fe6e42f4278750c784b5daaca4"} Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.721615 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f46ebbae-61e6-4e58-9f82-f47aca4269f5","Type":"ContainerDied","Data":"e927edc3487ec2833d7b1aca11bcba15dcad0c3fa70c97a5e2354370b50ff4f6"} Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.769879 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.838399 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-sg-core-conf-yaml\") pod \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.838513 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-scripts\") pod \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.838539 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-config-data\") pod \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.838718 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-combined-ca-bundle\") pod \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\" (UID: 
\"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.838749 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f46ebbae-61e6-4e58-9f82-f47aca4269f5-log-httpd\") pod \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.838813 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qqc8\" (UniqueName: \"kubernetes.io/projected/f46ebbae-61e6-4e58-9f82-f47aca4269f5-kube-api-access-9qqc8\") pod \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.838867 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f46ebbae-61e6-4e58-9f82-f47aca4269f5-run-httpd\") pod \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\" (UID: \"f46ebbae-61e6-4e58-9f82-f47aca4269f5\") " Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.840201 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f46ebbae-61e6-4e58-9f82-f47aca4269f5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f46ebbae-61e6-4e58-9f82-f47aca4269f5" (UID: "f46ebbae-61e6-4e58-9f82-f47aca4269f5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.840739 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f46ebbae-61e6-4e58-9f82-f47aca4269f5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f46ebbae-61e6-4e58-9f82-f47aca4269f5" (UID: "f46ebbae-61e6-4e58-9f82-f47aca4269f5"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.851951 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f46ebbae-61e6-4e58-9f82-f47aca4269f5-kube-api-access-9qqc8" (OuterVolumeSpecName: "kube-api-access-9qqc8") pod "f46ebbae-61e6-4e58-9f82-f47aca4269f5" (UID: "f46ebbae-61e6-4e58-9f82-f47aca4269f5"). InnerVolumeSpecName "kube-api-access-9qqc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.856602 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-scripts" (OuterVolumeSpecName: "scripts") pod "f46ebbae-61e6-4e58-9f82-f47aca4269f5" (UID: "f46ebbae-61e6-4e58-9f82-f47aca4269f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.880969 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f46ebbae-61e6-4e58-9f82-f47aca4269f5" (UID: "f46ebbae-61e6-4e58-9f82-f47aca4269f5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.941725 4947 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.941820 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.941835 4947 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f46ebbae-61e6-4e58-9f82-f47aca4269f5-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.941848 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qqc8\" (UniqueName: \"kubernetes.io/projected/f46ebbae-61e6-4e58-9f82-f47aca4269f5-kube-api-access-9qqc8\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.941869 4947 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f46ebbae-61e6-4e58-9f82-f47aca4269f5-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.955985 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f46ebbae-61e6-4e58-9f82-f47aca4269f5" (UID: "f46ebbae-61e6-4e58-9f82-f47aca4269f5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:35 crc kubenswrapper[4947]: I1129 06:57:35.965453 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-config-data" (OuterVolumeSpecName: "config-data") pod "f46ebbae-61e6-4e58-9f82-f47aca4269f5" (UID: "f46ebbae-61e6-4e58-9f82-f47aca4269f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.044302 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.044361 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f46ebbae-61e6-4e58-9f82-f47aca4269f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.737928 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f46ebbae-61e6-4e58-9f82-f47aca4269f5","Type":"ContainerDied","Data":"b37dabef3d365754f59ba99dc06e8af203f3b639ad70be1eb3d27b2fec136088"} Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.738017 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.738419 4947 scope.go:117] "RemoveContainer" containerID="ac6a9f067d0041e19e6c2cf915fb4679bef995b45a89f1f5fef88f133b721b25" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.774809 4947 scope.go:117] "RemoveContainer" containerID="d2d492d5d1459ed41d5fb2244a536d64169c93d969f19b60b580bf156b2e2fb6" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.793703 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.814626 4947 scope.go:117] "RemoveContainer" containerID="36286bf1fe0fa21e11986f770e2cc76fbef333fe6e42f4278750c784b5daaca4" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.815871 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.829636 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:36 crc kubenswrapper[4947]: E1129 06:57:36.830052 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d5a6e2f-204f-4356-b140-e1a58c242965" containerName="mariadb-account-create-update" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.830069 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d5a6e2f-204f-4356-b140-e1a58c242965" containerName="mariadb-account-create-update" Nov 29 06:57:36 crc kubenswrapper[4947]: E1129 06:57:36.830081 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b100e9b-0224-436a-a3a5-73587eda6743" containerName="mariadb-account-create-update" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.830089 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b100e9b-0224-436a-a3a5-73587eda6743" containerName="mariadb-account-create-update" Nov 29 06:57:36 crc kubenswrapper[4947]: E1129 06:57:36.830098 4947 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f46ebbae-61e6-4e58-9f82-f47aca4269f5" containerName="ceilometer-notification-agent" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.830107 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f46ebbae-61e6-4e58-9f82-f47aca4269f5" containerName="ceilometer-notification-agent" Nov 29 06:57:36 crc kubenswrapper[4947]: E1129 06:57:36.830118 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aadd9963-9c1e-4c5e-b03e-6577b3f1f139" containerName="mariadb-database-create" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.830126 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="aadd9963-9c1e-4c5e-b03e-6577b3f1f139" containerName="mariadb-database-create" Nov 29 06:57:36 crc kubenswrapper[4947]: E1129 06:57:36.830140 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7" containerName="mariadb-account-create-update" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.830146 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7" containerName="mariadb-account-create-update" Nov 29 06:57:36 crc kubenswrapper[4947]: E1129 06:57:36.830169 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f46ebbae-61e6-4e58-9f82-f47aca4269f5" containerName="proxy-httpd" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.830176 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f46ebbae-61e6-4e58-9f82-f47aca4269f5" containerName="proxy-httpd" Nov 29 06:57:36 crc kubenswrapper[4947]: E1129 06:57:36.830185 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f46ebbae-61e6-4e58-9f82-f47aca4269f5" containerName="ceilometer-central-agent" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.830192 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f46ebbae-61e6-4e58-9f82-f47aca4269f5" containerName="ceilometer-central-agent" Nov 29 06:57:36 crc kubenswrapper[4947]: E1129 
06:57:36.830211 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f46ebbae-61e6-4e58-9f82-f47aca4269f5" containerName="sg-core" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.830219 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f46ebbae-61e6-4e58-9f82-f47aca4269f5" containerName="sg-core" Nov 29 06:57:36 crc kubenswrapper[4947]: E1129 06:57:36.830236 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c4d360-3806-49fc-85be-f8e1ea6d5975" containerName="mariadb-database-create" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.830242 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c4d360-3806-49fc-85be-f8e1ea6d5975" containerName="mariadb-database-create" Nov 29 06:57:36 crc kubenswrapper[4947]: E1129 06:57:36.830270 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c8e8b7-9d6e-4918-acb3-77788534fba2" containerName="mariadb-database-create" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.830276 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c8e8b7-9d6e-4918-acb3-77788534fba2" containerName="mariadb-database-create" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.830465 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f46ebbae-61e6-4e58-9f82-f47aca4269f5" containerName="ceilometer-central-agent" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.830479 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f46ebbae-61e6-4e58-9f82-f47aca4269f5" containerName="proxy-httpd" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.830491 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b100e9b-0224-436a-a3a5-73587eda6743" containerName="mariadb-account-create-update" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.830499 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="aadd9963-9c1e-4c5e-b03e-6577b3f1f139" containerName="mariadb-database-create" 
Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.830511 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f46ebbae-61e6-4e58-9f82-f47aca4269f5" containerName="ceilometer-notification-agent" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.830522 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c4d360-3806-49fc-85be-f8e1ea6d5975" containerName="mariadb-database-create" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.830533 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="69c8e8b7-9d6e-4918-acb3-77788534fba2" containerName="mariadb-database-create" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.830545 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7" containerName="mariadb-account-create-update" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.830557 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d5a6e2f-204f-4356-b140-e1a58c242965" containerName="mariadb-account-create-update" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.830564 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f46ebbae-61e6-4e58-9f82-f47aca4269f5" containerName="sg-core" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.841861 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.842547 4947 scope.go:117] "RemoveContainer" containerID="e927edc3487ec2833d7b1aca11bcba15dcad0c3fa70c97a5e2354370b50ff4f6" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.845499 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.845910 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.856287 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.965094 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " pod="openstack/ceilometer-0" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.965182 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84kpn\" (UniqueName: \"kubernetes.io/projected/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-kube-api-access-84kpn\") pod \"ceilometer-0\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " pod="openstack/ceilometer-0" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.965215 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-run-httpd\") pod \"ceilometer-0\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " pod="openstack/ceilometer-0" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.965294 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-config-data\") pod \"ceilometer-0\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " pod="openstack/ceilometer-0" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.965349 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-scripts\") pod \"ceilometer-0\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " pod="openstack/ceilometer-0" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.965381 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " pod="openstack/ceilometer-0" Nov 29 06:57:36 crc kubenswrapper[4947]: I1129 06:57:36.965418 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-log-httpd\") pod \"ceilometer-0\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " pod="openstack/ceilometer-0" Nov 29 06:57:37 crc kubenswrapper[4947]: I1129 06:57:37.067040 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-scripts\") pod \"ceilometer-0\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " pod="openstack/ceilometer-0" Nov 29 06:57:37 crc kubenswrapper[4947]: I1129 06:57:37.067131 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " 
pod="openstack/ceilometer-0" Nov 29 06:57:37 crc kubenswrapper[4947]: I1129 06:57:37.067169 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-log-httpd\") pod \"ceilometer-0\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " pod="openstack/ceilometer-0" Nov 29 06:57:37 crc kubenswrapper[4947]: I1129 06:57:37.067222 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " pod="openstack/ceilometer-0" Nov 29 06:57:37 crc kubenswrapper[4947]: I1129 06:57:37.067277 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84kpn\" (UniqueName: \"kubernetes.io/projected/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-kube-api-access-84kpn\") pod \"ceilometer-0\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " pod="openstack/ceilometer-0" Nov 29 06:57:37 crc kubenswrapper[4947]: I1129 06:57:37.067302 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-run-httpd\") pod \"ceilometer-0\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " pod="openstack/ceilometer-0" Nov 29 06:57:37 crc kubenswrapper[4947]: I1129 06:57:37.067351 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-config-data\") pod \"ceilometer-0\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " pod="openstack/ceilometer-0" Nov 29 06:57:37 crc kubenswrapper[4947]: I1129 06:57:37.069252 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-run-httpd\") pod \"ceilometer-0\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " pod="openstack/ceilometer-0" Nov 29 06:57:37 crc kubenswrapper[4947]: I1129 06:57:37.069630 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-log-httpd\") pod \"ceilometer-0\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " pod="openstack/ceilometer-0" Nov 29 06:57:37 crc kubenswrapper[4947]: I1129 06:57:37.074944 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " pod="openstack/ceilometer-0" Nov 29 06:57:37 crc kubenswrapper[4947]: I1129 06:57:37.075709 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " pod="openstack/ceilometer-0" Nov 29 06:57:37 crc kubenswrapper[4947]: I1129 06:57:37.076002 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-scripts\") pod \"ceilometer-0\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " pod="openstack/ceilometer-0" Nov 29 06:57:37 crc kubenswrapper[4947]: I1129 06:57:37.076173 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-config-data\") pod \"ceilometer-0\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " pod="openstack/ceilometer-0" Nov 29 06:57:37 crc kubenswrapper[4947]: I1129 06:57:37.107123 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-84kpn\" (UniqueName: \"kubernetes.io/projected/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-kube-api-access-84kpn\") pod \"ceilometer-0\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " pod="openstack/ceilometer-0" Nov 29 06:57:37 crc kubenswrapper[4947]: I1129 06:57:37.175817 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:57:37 crc kubenswrapper[4947]: I1129 06:57:37.202616 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f46ebbae-61e6-4e58-9f82-f47aca4269f5" path="/var/lib/kubelet/pods/f46ebbae-61e6-4e58-9f82-f47aca4269f5/volumes" Nov 29 06:57:37 crc kubenswrapper[4947]: I1129 06:57:37.797662 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:38 crc kubenswrapper[4947]: I1129 06:57:38.787546 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8a50b2c-5e30-48c2-82ba-6af920f73a6b","Type":"ContainerStarted","Data":"67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de"} Nov 29 06:57:38 crc kubenswrapper[4947]: I1129 06:57:38.788452 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8a50b2c-5e30-48c2-82ba-6af920f73a6b","Type":"ContainerStarted","Data":"9ea47039ed1d8900a8ab13a57fda43badc6ccd8d4b479f83f1adaab0819eab20"} Nov 29 06:57:38 crc kubenswrapper[4947]: I1129 06:57:38.792837 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:38 crc kubenswrapper[4947]: I1129 06:57:38.872764 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7l78q"] Nov 29 06:57:38 crc kubenswrapper[4947]: I1129 06:57:38.874358 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7l78q" Nov 29 06:57:38 crc kubenswrapper[4947]: I1129 06:57:38.886050 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7l78q"] Nov 29 06:57:38 crc kubenswrapper[4947]: I1129 06:57:38.887707 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 29 06:57:38 crc kubenswrapper[4947]: I1129 06:57:38.888147 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 29 06:57:38 crc kubenswrapper[4947]: I1129 06:57:38.888369 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-qmt2g" Nov 29 06:57:38 crc kubenswrapper[4947]: I1129 06:57:38.917783 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d7c47be-5cbc-4cae-8eae-055a4693547c-scripts\") pod \"nova-cell0-conductor-db-sync-7l78q\" (UID: \"4d7c47be-5cbc-4cae-8eae-055a4693547c\") " pod="openstack/nova-cell0-conductor-db-sync-7l78q" Nov 29 06:57:38 crc kubenswrapper[4947]: I1129 06:57:38.917886 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d7c47be-5cbc-4cae-8eae-055a4693547c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7l78q\" (UID: \"4d7c47be-5cbc-4cae-8eae-055a4693547c\") " pod="openstack/nova-cell0-conductor-db-sync-7l78q" Nov 29 06:57:38 crc kubenswrapper[4947]: I1129 06:57:38.917919 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khlp4\" (UniqueName: \"kubernetes.io/projected/4d7c47be-5cbc-4cae-8eae-055a4693547c-kube-api-access-khlp4\") pod \"nova-cell0-conductor-db-sync-7l78q\" (UID: \"4d7c47be-5cbc-4cae-8eae-055a4693547c\") " 
pod="openstack/nova-cell0-conductor-db-sync-7l78q" Nov 29 06:57:38 crc kubenswrapper[4947]: I1129 06:57:38.917981 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d7c47be-5cbc-4cae-8eae-055a4693547c-config-data\") pod \"nova-cell0-conductor-db-sync-7l78q\" (UID: \"4d7c47be-5cbc-4cae-8eae-055a4693547c\") " pod="openstack/nova-cell0-conductor-db-sync-7l78q" Nov 29 06:57:39 crc kubenswrapper[4947]: I1129 06:57:39.022830 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d7c47be-5cbc-4cae-8eae-055a4693547c-scripts\") pod \"nova-cell0-conductor-db-sync-7l78q\" (UID: \"4d7c47be-5cbc-4cae-8eae-055a4693547c\") " pod="openstack/nova-cell0-conductor-db-sync-7l78q" Nov 29 06:57:39 crc kubenswrapper[4947]: I1129 06:57:39.022913 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d7c47be-5cbc-4cae-8eae-055a4693547c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7l78q\" (UID: \"4d7c47be-5cbc-4cae-8eae-055a4693547c\") " pod="openstack/nova-cell0-conductor-db-sync-7l78q" Nov 29 06:57:39 crc kubenswrapper[4947]: I1129 06:57:39.022950 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khlp4\" (UniqueName: \"kubernetes.io/projected/4d7c47be-5cbc-4cae-8eae-055a4693547c-kube-api-access-khlp4\") pod \"nova-cell0-conductor-db-sync-7l78q\" (UID: \"4d7c47be-5cbc-4cae-8eae-055a4693547c\") " pod="openstack/nova-cell0-conductor-db-sync-7l78q" Nov 29 06:57:39 crc kubenswrapper[4947]: I1129 06:57:39.023026 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d7c47be-5cbc-4cae-8eae-055a4693547c-config-data\") pod \"nova-cell0-conductor-db-sync-7l78q\" (UID: 
\"4d7c47be-5cbc-4cae-8eae-055a4693547c\") " pod="openstack/nova-cell0-conductor-db-sync-7l78q" Nov 29 06:57:39 crc kubenswrapper[4947]: I1129 06:57:39.031601 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d7c47be-5cbc-4cae-8eae-055a4693547c-scripts\") pod \"nova-cell0-conductor-db-sync-7l78q\" (UID: \"4d7c47be-5cbc-4cae-8eae-055a4693547c\") " pod="openstack/nova-cell0-conductor-db-sync-7l78q" Nov 29 06:57:39 crc kubenswrapper[4947]: I1129 06:57:39.032172 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d7c47be-5cbc-4cae-8eae-055a4693547c-config-data\") pod \"nova-cell0-conductor-db-sync-7l78q\" (UID: \"4d7c47be-5cbc-4cae-8eae-055a4693547c\") " pod="openstack/nova-cell0-conductor-db-sync-7l78q" Nov 29 06:57:39 crc kubenswrapper[4947]: I1129 06:57:39.032323 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d7c47be-5cbc-4cae-8eae-055a4693547c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7l78q\" (UID: \"4d7c47be-5cbc-4cae-8eae-055a4693547c\") " pod="openstack/nova-cell0-conductor-db-sync-7l78q" Nov 29 06:57:39 crc kubenswrapper[4947]: I1129 06:57:39.046888 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khlp4\" (UniqueName: \"kubernetes.io/projected/4d7c47be-5cbc-4cae-8eae-055a4693547c-kube-api-access-khlp4\") pod \"nova-cell0-conductor-db-sync-7l78q\" (UID: \"4d7c47be-5cbc-4cae-8eae-055a4693547c\") " pod="openstack/nova-cell0-conductor-db-sync-7l78q" Nov 29 06:57:39 crc kubenswrapper[4947]: I1129 06:57:39.197760 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7l78q" Nov 29 06:57:39 crc kubenswrapper[4947]: I1129 06:57:39.597847 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-dbcc54df-b7blt" Nov 29 06:57:39 crc kubenswrapper[4947]: I1129 06:57:39.702650 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7d7467cbc8-jttcd"] Nov 29 06:57:39 crc kubenswrapper[4947]: I1129 06:57:39.703446 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7d7467cbc8-jttcd" podUID="37e2a512-1c34-4400-8986-244e64410004" containerName="neutron-api" containerID="cri-o://b66610f79b9b74b6dc6abb459e34b205c173b7d7b7427422616711f042ef3e3e" gracePeriod=30 Nov 29 06:57:39 crc kubenswrapper[4947]: I1129 06:57:39.704048 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7d7467cbc8-jttcd" podUID="37e2a512-1c34-4400-8986-244e64410004" containerName="neutron-httpd" containerID="cri-o://6de20a3f02eacb904a39e3413e77f93d0c3c36c1cbbfa751bcb7cae8c3a74639" gracePeriod=30 Nov 29 06:57:39 crc kubenswrapper[4947]: I1129 06:57:39.762414 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7l78q"] Nov 29 06:57:40 crc kubenswrapper[4947]: I1129 06:57:40.819779 4947 generic.go:334] "Generic (PLEG): container finished" podID="37e2a512-1c34-4400-8986-244e64410004" containerID="6de20a3f02eacb904a39e3413e77f93d0c3c36c1cbbfa751bcb7cae8c3a74639" exitCode=0 Nov 29 06:57:40 crc kubenswrapper[4947]: I1129 06:57:40.819898 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d7467cbc8-jttcd" event={"ID":"37e2a512-1c34-4400-8986-244e64410004","Type":"ContainerDied","Data":"6de20a3f02eacb904a39e3413e77f93d0c3c36c1cbbfa751bcb7cae8c3a74639"} Nov 29 06:57:40 crc kubenswrapper[4947]: I1129 06:57:40.834094 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"b8a50b2c-5e30-48c2-82ba-6af920f73a6b","Type":"ContainerStarted","Data":"b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd"} Nov 29 06:57:40 crc kubenswrapper[4947]: I1129 06:57:40.837192 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7l78q" event={"ID":"4d7c47be-5cbc-4cae-8eae-055a4693547c","Type":"ContainerStarted","Data":"6fb12f998247e26704db3f9fd9e56b382ec4bfcaaea27b568ab17b72eefa347c"} Nov 29 06:57:42 crc kubenswrapper[4947]: E1129 06:57:42.198330 4947 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37e2a512_1c34_4400_8986_244e64410004.slice/crio-b66610f79b9b74b6dc6abb459e34b205c173b7d7b7427422616711f042ef3e3e.scope\": RecentStats: unable to find data in memory cache]" Nov 29 06:57:42 crc kubenswrapper[4947]: I1129 06:57:42.858878 4947 generic.go:334] "Generic (PLEG): container finished" podID="37e2a512-1c34-4400-8986-244e64410004" containerID="b66610f79b9b74b6dc6abb459e34b205c173b7d7b7427422616711f042ef3e3e" exitCode=0 Nov 29 06:57:42 crc kubenswrapper[4947]: I1129 06:57:42.858972 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d7467cbc8-jttcd" event={"ID":"37e2a512-1c34-4400-8986-244e64410004","Type":"ContainerDied","Data":"b66610f79b9b74b6dc6abb459e34b205c173b7d7b7427422616711f042ef3e3e"} Nov 29 06:57:43 crc kubenswrapper[4947]: I1129 06:57:43.660904 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7d7467cbc8-jttcd" Nov 29 06:57:43 crc kubenswrapper[4947]: I1129 06:57:43.855941 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-httpd-config\") pod \"37e2a512-1c34-4400-8986-244e64410004\" (UID: \"37e2a512-1c34-4400-8986-244e64410004\") " Nov 29 06:57:43 crc kubenswrapper[4947]: I1129 06:57:43.856037 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-ovndb-tls-certs\") pod \"37e2a512-1c34-4400-8986-244e64410004\" (UID: \"37e2a512-1c34-4400-8986-244e64410004\") " Nov 29 06:57:43 crc kubenswrapper[4947]: I1129 06:57:43.856223 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-config\") pod \"37e2a512-1c34-4400-8986-244e64410004\" (UID: \"37e2a512-1c34-4400-8986-244e64410004\") " Nov 29 06:57:43 crc kubenswrapper[4947]: I1129 06:57:43.856307 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb5fb\" (UniqueName: \"kubernetes.io/projected/37e2a512-1c34-4400-8986-244e64410004-kube-api-access-jb5fb\") pod \"37e2a512-1c34-4400-8986-244e64410004\" (UID: \"37e2a512-1c34-4400-8986-244e64410004\") " Nov 29 06:57:43 crc kubenswrapper[4947]: I1129 06:57:43.856359 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-combined-ca-bundle\") pod \"37e2a512-1c34-4400-8986-244e64410004\" (UID: \"37e2a512-1c34-4400-8986-244e64410004\") " Nov 29 06:57:43 crc kubenswrapper[4947]: I1129 06:57:43.865224 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "37e2a512-1c34-4400-8986-244e64410004" (UID: "37e2a512-1c34-4400-8986-244e64410004"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:43 crc kubenswrapper[4947]: I1129 06:57:43.882144 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7d7467cbc8-jttcd" Nov 29 06:57:43 crc kubenswrapper[4947]: I1129 06:57:43.882155 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d7467cbc8-jttcd" event={"ID":"37e2a512-1c34-4400-8986-244e64410004","Type":"ContainerDied","Data":"3eca658dafbdb3640f4d19a53677695531eab41e8263797a8550999e17d0d64a"} Nov 29 06:57:43 crc kubenswrapper[4947]: I1129 06:57:43.882284 4947 scope.go:117] "RemoveContainer" containerID="6de20a3f02eacb904a39e3413e77f93d0c3c36c1cbbfa751bcb7cae8c3a74639" Nov 29 06:57:43 crc kubenswrapper[4947]: I1129 06:57:43.885977 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37e2a512-1c34-4400-8986-244e64410004-kube-api-access-jb5fb" (OuterVolumeSpecName: "kube-api-access-jb5fb") pod "37e2a512-1c34-4400-8986-244e64410004" (UID: "37e2a512-1c34-4400-8986-244e64410004"). InnerVolumeSpecName "kube-api-access-jb5fb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:57:43 crc kubenswrapper[4947]: I1129 06:57:43.887284 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8a50b2c-5e30-48c2-82ba-6af920f73a6b","Type":"ContainerStarted","Data":"34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548"} Nov 29 06:57:43 crc kubenswrapper[4947]: I1129 06:57:43.935466 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-config" (OuterVolumeSpecName: "config") pod "37e2a512-1c34-4400-8986-244e64410004" (UID: "37e2a512-1c34-4400-8986-244e64410004"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:43 crc kubenswrapper[4947]: I1129 06:57:43.941188 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37e2a512-1c34-4400-8986-244e64410004" (UID: "37e2a512-1c34-4400-8986-244e64410004"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:43 crc kubenswrapper[4947]: I1129 06:57:43.952384 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "37e2a512-1c34-4400-8986-244e64410004" (UID: "37e2a512-1c34-4400-8986-244e64410004"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:43 crc kubenswrapper[4947]: I1129 06:57:43.958855 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:43 crc kubenswrapper[4947]: I1129 06:57:43.958901 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb5fb\" (UniqueName: \"kubernetes.io/projected/37e2a512-1c34-4400-8986-244e64410004-kube-api-access-jb5fb\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:43 crc kubenswrapper[4947]: I1129 06:57:43.958914 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:43 crc kubenswrapper[4947]: I1129 06:57:43.958925 4947 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:43 crc kubenswrapper[4947]: I1129 06:57:43.958936 4947 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/37e2a512-1c34-4400-8986-244e64410004-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:44 crc kubenswrapper[4947]: I1129 06:57:44.603139 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7d7467cbc8-jttcd"] Nov 29 06:57:44 crc kubenswrapper[4947]: I1129 06:57:44.634802 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7d7467cbc8-jttcd"] Nov 29 06:57:45 crc kubenswrapper[4947]: I1129 06:57:45.200244 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37e2a512-1c34-4400-8986-244e64410004" path="/var/lib/kubelet/pods/37e2a512-1c34-4400-8986-244e64410004/volumes" Nov 29 06:57:48 crc kubenswrapper[4947]: 
I1129 06:57:48.895469 4947 scope.go:117] "RemoveContainer" containerID="b66610f79b9b74b6dc6abb459e34b205c173b7d7b7427422616711f042ef3e3e" Nov 29 06:57:49 crc kubenswrapper[4947]: I1129 06:57:49.992896 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8a50b2c-5e30-48c2-82ba-6af920f73a6b","Type":"ContainerStarted","Data":"73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09"} Nov 29 06:57:49 crc kubenswrapper[4947]: I1129 06:57:49.993651 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8a50b2c-5e30-48c2-82ba-6af920f73a6b" containerName="ceilometer-central-agent" containerID="cri-o://67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de" gracePeriod=30 Nov 29 06:57:49 crc kubenswrapper[4947]: I1129 06:57:49.994044 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 06:57:49 crc kubenswrapper[4947]: I1129 06:57:49.994371 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8a50b2c-5e30-48c2-82ba-6af920f73a6b" containerName="proxy-httpd" containerID="cri-o://73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09" gracePeriod=30 Nov 29 06:57:49 crc kubenswrapper[4947]: I1129 06:57:49.994425 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8a50b2c-5e30-48c2-82ba-6af920f73a6b" containerName="sg-core" containerID="cri-o://34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548" gracePeriod=30 Nov 29 06:57:49 crc kubenswrapper[4947]: I1129 06:57:49.994467 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8a50b2c-5e30-48c2-82ba-6af920f73a6b" containerName="ceilometer-notification-agent" containerID="cri-o://b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd" gracePeriod=30 
Nov 29 06:57:50 crc kubenswrapper[4947]: I1129 06:57:50.009749 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7l78q" event={"ID":"4d7c47be-5cbc-4cae-8eae-055a4693547c","Type":"ContainerStarted","Data":"89c1f2844aa41674ee039340c8f4908911aee71616ae7f37a62f133381644b56"} Nov 29 06:57:50 crc kubenswrapper[4947]: I1129 06:57:50.025743 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.791842541 podStartE2EDuration="14.025714268s" podCreationTimestamp="2025-11-29 06:57:36 +0000 UTC" firstStartedPulling="2025-11-29 06:57:37.804772436 +0000 UTC m=+1408.849154517" lastFinishedPulling="2025-11-29 06:57:49.038644163 +0000 UTC m=+1420.083026244" observedRunningTime="2025-11-29 06:57:50.016531207 +0000 UTC m=+1421.060913308" watchObservedRunningTime="2025-11-29 06:57:50.025714268 +0000 UTC m=+1421.070096349" Nov 29 06:57:50 crc kubenswrapper[4947]: I1129 06:57:50.051285 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-7l78q" podStartSLOduration=2.817472544 podStartE2EDuration="12.05126138s" podCreationTimestamp="2025-11-29 06:57:38 +0000 UTC" firstStartedPulling="2025-11-29 06:57:39.800940358 +0000 UTC m=+1410.845322439" lastFinishedPulling="2025-11-29 06:57:49.034729194 +0000 UTC m=+1420.079111275" observedRunningTime="2025-11-29 06:57:50.046374087 +0000 UTC m=+1421.090756208" watchObservedRunningTime="2025-11-29 06:57:50.05126138 +0000 UTC m=+1421.095643461" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.028591 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.033133 4947 generic.go:334] "Generic (PLEG): container finished" podID="b8a50b2c-5e30-48c2-82ba-6af920f73a6b" containerID="73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09" exitCode=0 Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.033184 4947 generic.go:334] "Generic (PLEG): container finished" podID="b8a50b2c-5e30-48c2-82ba-6af920f73a6b" containerID="34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548" exitCode=2 Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.033193 4947 generic.go:334] "Generic (PLEG): container finished" podID="b8a50b2c-5e30-48c2-82ba-6af920f73a6b" containerID="b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd" exitCode=0 Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.033202 4947 generic.go:334] "Generic (PLEG): container finished" podID="b8a50b2c-5e30-48c2-82ba-6af920f73a6b" containerID="67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de" exitCode=0 Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.036590 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8a50b2c-5e30-48c2-82ba-6af920f73a6b","Type":"ContainerDied","Data":"73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09"} Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.036712 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8a50b2c-5e30-48c2-82ba-6af920f73a6b","Type":"ContainerDied","Data":"34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548"} Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.036728 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8a50b2c-5e30-48c2-82ba-6af920f73a6b","Type":"ContainerDied","Data":"b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd"} Nov 29 06:57:51 crc 
kubenswrapper[4947]: I1129 06:57:51.036745 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8a50b2c-5e30-48c2-82ba-6af920f73a6b","Type":"ContainerDied","Data":"67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de"} Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.036761 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8a50b2c-5e30-48c2-82ba-6af920f73a6b","Type":"ContainerDied","Data":"9ea47039ed1d8900a8ab13a57fda43badc6ccd8d4b479f83f1adaab0819eab20"} Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.036787 4947 scope.go:117] "RemoveContainer" containerID="73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.083872 4947 scope.go:117] "RemoveContainer" containerID="34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.091461 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-config-data\") pod \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.091547 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-log-httpd\") pod \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.091600 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-run-httpd\") pod \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " Nov 29 06:57:51 crc 
kubenswrapper[4947]: I1129 06:57:51.091659 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-scripts\") pod \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.091695 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-combined-ca-bundle\") pod \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.091729 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-sg-core-conf-yaml\") pod \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.091798 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84kpn\" (UniqueName: \"kubernetes.io/projected/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-kube-api-access-84kpn\") pod \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\" (UID: \"b8a50b2c-5e30-48c2-82ba-6af920f73a6b\") " Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.092631 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b8a50b2c-5e30-48c2-82ba-6af920f73a6b" (UID: "b8a50b2c-5e30-48c2-82ba-6af920f73a6b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.092688 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b8a50b2c-5e30-48c2-82ba-6af920f73a6b" (UID: "b8a50b2c-5e30-48c2-82ba-6af920f73a6b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.101820 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-scripts" (OuterVolumeSpecName: "scripts") pod "b8a50b2c-5e30-48c2-82ba-6af920f73a6b" (UID: "b8a50b2c-5e30-48c2-82ba-6af920f73a6b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.103343 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-kube-api-access-84kpn" (OuterVolumeSpecName: "kube-api-access-84kpn") pod "b8a50b2c-5e30-48c2-82ba-6af920f73a6b" (UID: "b8a50b2c-5e30-48c2-82ba-6af920f73a6b"). InnerVolumeSpecName "kube-api-access-84kpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.129455 4947 scope.go:117] "RemoveContainer" containerID="b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.181140 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b8a50b2c-5e30-48c2-82ba-6af920f73a6b" (UID: "b8a50b2c-5e30-48c2-82ba-6af920f73a6b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.194900 4947 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.194974 4947 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.194994 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.195026 4947 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.195040 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84kpn\" (UniqueName: \"kubernetes.io/projected/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-kube-api-access-84kpn\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.242789 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-config-data" (OuterVolumeSpecName: "config-data") pod "b8a50b2c-5e30-48c2-82ba-6af920f73a6b" (UID: "b8a50b2c-5e30-48c2-82ba-6af920f73a6b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.250711 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8a50b2c-5e30-48c2-82ba-6af920f73a6b" (UID: "b8a50b2c-5e30-48c2-82ba-6af920f73a6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.298755 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.298822 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8a50b2c-5e30-48c2-82ba-6af920f73a6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.324809 4947 scope.go:117] "RemoveContainer" containerID="67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.362611 4947 scope.go:117] "RemoveContainer" containerID="73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09" Nov 29 06:57:51 crc kubenswrapper[4947]: E1129 06:57:51.363356 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09\": container with ID starting with 73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09 not found: ID does not exist" containerID="73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.363435 4947 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09"} err="failed to get container status \"73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09\": rpc error: code = NotFound desc = could not find container \"73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09\": container with ID starting with 73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09 not found: ID does not exist" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.363482 4947 scope.go:117] "RemoveContainer" containerID="34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548" Nov 29 06:57:51 crc kubenswrapper[4947]: E1129 06:57:51.364296 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548\": container with ID starting with 34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548 not found: ID does not exist" containerID="34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.364354 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548"} err="failed to get container status \"34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548\": rpc error: code = NotFound desc = could not find container \"34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548\": container with ID starting with 34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548 not found: ID does not exist" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.364374 4947 scope.go:117] "RemoveContainer" containerID="b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd" Nov 29 06:57:51 crc kubenswrapper[4947]: E1129 06:57:51.364820 4947 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd\": container with ID starting with b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd not found: ID does not exist" containerID="b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.364844 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd"} err="failed to get container status \"b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd\": rpc error: code = NotFound desc = could not find container \"b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd\": container with ID starting with b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd not found: ID does not exist" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.364884 4947 scope.go:117] "RemoveContainer" containerID="67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de" Nov 29 06:57:51 crc kubenswrapper[4947]: E1129 06:57:51.365192 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de\": container with ID starting with 67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de not found: ID does not exist" containerID="67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.365296 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de"} err="failed to get container status \"67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de\": rpc error: code = NotFound desc = could not find container 
\"67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de\": container with ID starting with 67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de not found: ID does not exist" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.365321 4947 scope.go:117] "RemoveContainer" containerID="73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.365951 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09"} err="failed to get container status \"73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09\": rpc error: code = NotFound desc = could not find container \"73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09\": container with ID starting with 73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09 not found: ID does not exist" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.366023 4947 scope.go:117] "RemoveContainer" containerID="34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.366541 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548"} err="failed to get container status \"34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548\": rpc error: code = NotFound desc = could not find container \"34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548\": container with ID starting with 34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548 not found: ID does not exist" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.366568 4947 scope.go:117] "RemoveContainer" containerID="b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.366980 4947 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd"} err="failed to get container status \"b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd\": rpc error: code = NotFound desc = could not find container \"b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd\": container with ID starting with b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd not found: ID does not exist" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.367040 4947 scope.go:117] "RemoveContainer" containerID="67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.367560 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de"} err="failed to get container status \"67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de\": rpc error: code = NotFound desc = could not find container \"67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de\": container with ID starting with 67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de not found: ID does not exist" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.367598 4947 scope.go:117] "RemoveContainer" containerID="73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.367973 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09"} err="failed to get container status \"73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09\": rpc error: code = NotFound desc = could not find container \"73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09\": container with ID starting with 
73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09 not found: ID does not exist" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.368008 4947 scope.go:117] "RemoveContainer" containerID="34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.368306 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548"} err="failed to get container status \"34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548\": rpc error: code = NotFound desc = could not find container \"34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548\": container with ID starting with 34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548 not found: ID does not exist" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.368331 4947 scope.go:117] "RemoveContainer" containerID="b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.368578 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd"} err="failed to get container status \"b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd\": rpc error: code = NotFound desc = could not find container \"b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd\": container with ID starting with b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd not found: ID does not exist" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.368604 4947 scope.go:117] "RemoveContainer" containerID="67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.368889 4947 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de"} err="failed to get container status \"67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de\": rpc error: code = NotFound desc = could not find container \"67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de\": container with ID starting with 67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de not found: ID does not exist" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.368914 4947 scope.go:117] "RemoveContainer" containerID="73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.369211 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09"} err="failed to get container status \"73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09\": rpc error: code = NotFound desc = could not find container \"73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09\": container with ID starting with 73a34274dd43927c30030b4fd12ccdd99c7cbc9d004b9561c5dca8e4caec8f09 not found: ID does not exist" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.369264 4947 scope.go:117] "RemoveContainer" containerID="34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.369566 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548"} err="failed to get container status \"34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548\": rpc error: code = NotFound desc = could not find container \"34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548\": container with ID starting with 34a98177a8729acc6c00fd84e4affbed0fd244c0f9f270d5f5d9f5d8a0acc548 not found: ID does not 
exist" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.369591 4947 scope.go:117] "RemoveContainer" containerID="b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.369857 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd"} err="failed to get container status \"b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd\": rpc error: code = NotFound desc = could not find container \"b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd\": container with ID starting with b3cbbb385a505ce065545f8449d63dc10cfbbebdb3b53c291070d82e927f9ccd not found: ID does not exist" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.369884 4947 scope.go:117] "RemoveContainer" containerID="67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de" Nov 29 06:57:51 crc kubenswrapper[4947]: I1129 06:57:51.370263 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de"} err="failed to get container status \"67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de\": rpc error: code = NotFound desc = could not find container \"67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de\": container with ID starting with 67477ab5bbc36f7a0ca8cd3adcb9bc0e8bf1696a140c1cef83f51352e5ba97de not found: ID does not exist" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.054038 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.105104 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.114298 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.132092 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:52 crc kubenswrapper[4947]: E1129 06:57:52.132742 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e2a512-1c34-4400-8986-244e64410004" containerName="neutron-httpd" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.132771 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e2a512-1c34-4400-8986-244e64410004" containerName="neutron-httpd" Nov 29 06:57:52 crc kubenswrapper[4947]: E1129 06:57:52.132814 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e2a512-1c34-4400-8986-244e64410004" containerName="neutron-api" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.132824 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e2a512-1c34-4400-8986-244e64410004" containerName="neutron-api" Nov 29 06:57:52 crc kubenswrapper[4947]: E1129 06:57:52.132838 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a50b2c-5e30-48c2-82ba-6af920f73a6b" containerName="sg-core" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.132845 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a50b2c-5e30-48c2-82ba-6af920f73a6b" containerName="sg-core" Nov 29 06:57:52 crc kubenswrapper[4947]: E1129 06:57:52.132856 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a50b2c-5e30-48c2-82ba-6af920f73a6b" containerName="ceilometer-notification-agent" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.132863 4947 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b8a50b2c-5e30-48c2-82ba-6af920f73a6b" containerName="ceilometer-notification-agent" Nov 29 06:57:52 crc kubenswrapper[4947]: E1129 06:57:52.132874 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a50b2c-5e30-48c2-82ba-6af920f73a6b" containerName="proxy-httpd" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.132881 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a50b2c-5e30-48c2-82ba-6af920f73a6b" containerName="proxy-httpd" Nov 29 06:57:52 crc kubenswrapper[4947]: E1129 06:57:52.132895 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a50b2c-5e30-48c2-82ba-6af920f73a6b" containerName="ceilometer-central-agent" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.132904 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a50b2c-5e30-48c2-82ba-6af920f73a6b" containerName="ceilometer-central-agent" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.133134 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8a50b2c-5e30-48c2-82ba-6af920f73a6b" containerName="ceilometer-notification-agent" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.133157 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8a50b2c-5e30-48c2-82ba-6af920f73a6b" containerName="sg-core" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.133171 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="37e2a512-1c34-4400-8986-244e64410004" containerName="neutron-api" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.133189 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8a50b2c-5e30-48c2-82ba-6af920f73a6b" containerName="ceilometer-central-agent" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.133203 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="37e2a512-1c34-4400-8986-244e64410004" containerName="neutron-httpd" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.133217 4947 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b8a50b2c-5e30-48c2-82ba-6af920f73a6b" containerName="proxy-httpd" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.135798 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.138417 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.138660 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.154816 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.187625 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:52 crc kubenswrapper[4947]: E1129 06:57:52.188619 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-44n5z log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[combined-ca-bundle config-data kube-api-access-44n5z log-httpd run-httpd scripts sg-core-conf-yaml]: context canceled" pod="openstack/ceilometer-0" podUID="55e0b72b-2489-4b90-a6a4-e299275d0b85" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.216033 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55e0b72b-2489-4b90-a6a4-e299275d0b85-run-httpd\") pod \"ceilometer-0\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.216090 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.216164 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44n5z\" (UniqueName: \"kubernetes.io/projected/55e0b72b-2489-4b90-a6a4-e299275d0b85-kube-api-access-44n5z\") pod \"ceilometer-0\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.216253 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.216283 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-scripts\") pod \"ceilometer-0\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.216307 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-config-data\") pod \"ceilometer-0\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.216330 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55e0b72b-2489-4b90-a6a4-e299275d0b85-log-httpd\") pod \"ceilometer-0\" (UID: 
\"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.316939 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55e0b72b-2489-4b90-a6a4-e299275d0b85-run-httpd\") pod \"ceilometer-0\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.317000 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.317074 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44n5z\" (UniqueName: \"kubernetes.io/projected/55e0b72b-2489-4b90-a6a4-e299275d0b85-kube-api-access-44n5z\") pod \"ceilometer-0\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.317104 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.317133 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-scripts\") pod \"ceilometer-0\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.317158 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-config-data\") pod \"ceilometer-0\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.317186 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55e0b72b-2489-4b90-a6a4-e299275d0b85-log-httpd\") pod \"ceilometer-0\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.317476 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55e0b72b-2489-4b90-a6a4-e299275d0b85-run-httpd\") pod \"ceilometer-0\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.317733 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55e0b72b-2489-4b90-a6a4-e299275d0b85-log-httpd\") pod \"ceilometer-0\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.324717 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.326052 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-scripts\") pod \"ceilometer-0\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.326385 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.326566 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-config-data\") pod \"ceilometer-0\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.342101 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44n5z\" (UniqueName: \"kubernetes.io/projected/55e0b72b-2489-4b90-a6a4-e299275d0b85-kube-api-access-44n5z\") pod \"ceilometer-0\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " pod="openstack/ceilometer-0" Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.987496 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 06:57:52 crc kubenswrapper[4947]: I1129 06:57:52.987976 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.062971 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.084850 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.193012 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8a50b2c-5e30-48c2-82ba-6af920f73a6b" path="/var/lib/kubelet/pods/b8a50b2c-5e30-48c2-82ba-6af920f73a6b/volumes" Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.234610 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55e0b72b-2489-4b90-a6a4-e299275d0b85-run-httpd\") pod \"55e0b72b-2489-4b90-a6a4-e299275d0b85\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.234680 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-combined-ca-bundle\") pod \"55e0b72b-2489-4b90-a6a4-e299275d0b85\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.234730 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44n5z\" (UniqueName: \"kubernetes.io/projected/55e0b72b-2489-4b90-a6a4-e299275d0b85-kube-api-access-44n5z\") pod \"55e0b72b-2489-4b90-a6a4-e299275d0b85\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.234769 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-config-data\") pod \"55e0b72b-2489-4b90-a6a4-e299275d0b85\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.234949 4947 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-scripts\") pod \"55e0b72b-2489-4b90-a6a4-e299275d0b85\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.234993 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55e0b72b-2489-4b90-a6a4-e299275d0b85-log-httpd\") pod \"55e0b72b-2489-4b90-a6a4-e299275d0b85\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.235018 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-sg-core-conf-yaml\") pod \"55e0b72b-2489-4b90-a6a4-e299275d0b85\" (UID: \"55e0b72b-2489-4b90-a6a4-e299275d0b85\") " Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.235099 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55e0b72b-2489-4b90-a6a4-e299275d0b85-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "55e0b72b-2489-4b90-a6a4-e299275d0b85" (UID: "55e0b72b-2489-4b90-a6a4-e299275d0b85"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.235418 4947 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55e0b72b-2489-4b90-a6a4-e299275d0b85-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.235468 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55e0b72b-2489-4b90-a6a4-e299275d0b85-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "55e0b72b-2489-4b90-a6a4-e299275d0b85" (UID: "55e0b72b-2489-4b90-a6a4-e299275d0b85"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.241698 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55e0b72b-2489-4b90-a6a4-e299275d0b85-kube-api-access-44n5z" (OuterVolumeSpecName: "kube-api-access-44n5z") pod "55e0b72b-2489-4b90-a6a4-e299275d0b85" (UID: "55e0b72b-2489-4b90-a6a4-e299275d0b85"). InnerVolumeSpecName "kube-api-access-44n5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.242025 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "55e0b72b-2489-4b90-a6a4-e299275d0b85" (UID: "55e0b72b-2489-4b90-a6a4-e299275d0b85"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.242544 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-config-data" (OuterVolumeSpecName: "config-data") pod "55e0b72b-2489-4b90-a6a4-e299275d0b85" (UID: "55e0b72b-2489-4b90-a6a4-e299275d0b85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.255419 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55e0b72b-2489-4b90-a6a4-e299275d0b85" (UID: "55e0b72b-2489-4b90-a6a4-e299275d0b85"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.257826 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-scripts" (OuterVolumeSpecName: "scripts") pod "55e0b72b-2489-4b90-a6a4-e299275d0b85" (UID: "55e0b72b-2489-4b90-a6a4-e299275d0b85"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.337184 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.337254 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44n5z\" (UniqueName: \"kubernetes.io/projected/55e0b72b-2489-4b90-a6a4-e299275d0b85-kube-api-access-44n5z\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.337273 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.337287 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.337301 4947 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55e0b72b-2489-4b90-a6a4-e299275d0b85-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:53 crc kubenswrapper[4947]: I1129 06:57:53.337316 4947 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/55e0b72b-2489-4b90-a6a4-e299275d0b85-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.072657 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.140848 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.163329 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.170028 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.175370 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.179195 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.182199 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.182540 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.260121 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-scripts\") pod \"ceilometer-0\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.260181 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skb5w\" (UniqueName: 
\"kubernetes.io/projected/b5177fef-81ba-4f18-b118-53b5dfa4bc36-kube-api-access-skb5w\") pod \"ceilometer-0\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.260211 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.260478 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.260539 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5177fef-81ba-4f18-b118-53b5dfa4bc36-log-httpd\") pod \"ceilometer-0\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.260586 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-config-data\") pod \"ceilometer-0\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.260744 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5177fef-81ba-4f18-b118-53b5dfa4bc36-run-httpd\") pod \"ceilometer-0\" (UID: 
\"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.363504 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-scripts\") pod \"ceilometer-0\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.363587 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.363625 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skb5w\" (UniqueName: \"kubernetes.io/projected/b5177fef-81ba-4f18-b118-53b5dfa4bc36-kube-api-access-skb5w\") pod \"ceilometer-0\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.363685 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.363729 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5177fef-81ba-4f18-b118-53b5dfa4bc36-log-httpd\") pod \"ceilometer-0\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.363772 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-config-data\") pod \"ceilometer-0\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.363846 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5177fef-81ba-4f18-b118-53b5dfa4bc36-run-httpd\") pod \"ceilometer-0\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.364702 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5177fef-81ba-4f18-b118-53b5dfa4bc36-run-httpd\") pod \"ceilometer-0\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.365628 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5177fef-81ba-4f18-b118-53b5dfa4bc36-log-httpd\") pod \"ceilometer-0\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.370977 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-scripts\") pod \"ceilometer-0\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.371004 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.371924 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.373640 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-config-data\") pod \"ceilometer-0\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.386710 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skb5w\" (UniqueName: \"kubernetes.io/projected/b5177fef-81ba-4f18-b118-53b5dfa4bc36-kube-api-access-skb5w\") pod \"ceilometer-0\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.499932 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:57:54 crc kubenswrapper[4947]: I1129 06:57:54.986281 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:57:54 crc kubenswrapper[4947]: W1129 06:57:54.997165 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5177fef_81ba_4f18_b118_53b5dfa4bc36.slice/crio-8b237ebc6586b2b2d1f04f2a5435eeb76972dece7090a3a829f9efd4f3a6f592 WatchSource:0}: Error finding container 8b237ebc6586b2b2d1f04f2a5435eeb76972dece7090a3a829f9efd4f3a6f592: Status 404 returned error can't find the container with id 8b237ebc6586b2b2d1f04f2a5435eeb76972dece7090a3a829f9efd4f3a6f592 Nov 29 06:57:55 crc kubenswrapper[4947]: I1129 06:57:55.085113 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5177fef-81ba-4f18-b118-53b5dfa4bc36","Type":"ContainerStarted","Data":"8b237ebc6586b2b2d1f04f2a5435eeb76972dece7090a3a829f9efd4f3a6f592"} Nov 29 06:57:55 crc kubenswrapper[4947]: I1129 06:57:55.192515 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55e0b72b-2489-4b90-a6a4-e299275d0b85" path="/var/lib/kubelet/pods/55e0b72b-2489-4b90-a6a4-e299275d0b85/volumes" Nov 29 06:57:56 crc kubenswrapper[4947]: I1129 06:57:56.097602 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5177fef-81ba-4f18-b118-53b5dfa4bc36","Type":"ContainerStarted","Data":"d482f275afe3dfba197d7727361c5267a57f4c8724e6c9882417d5043c5fe1b6"} Nov 29 06:57:58 crc kubenswrapper[4947]: I1129 06:57:58.122005 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5177fef-81ba-4f18-b118-53b5dfa4bc36","Type":"ContainerStarted","Data":"1a42893ec10eb5bf4f93766065a9c46249fe9d4d875e69bbda0121da907d0334"} Nov 29 06:57:58 crc kubenswrapper[4947]: I1129 06:57:58.125286 4947 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"b5177fef-81ba-4f18-b118-53b5dfa4bc36","Type":"ContainerStarted","Data":"78f0686bdd74c35d9dcab63d5dc07b703f13ae8a922b45ac0d8000120769d691"} Nov 29 06:58:00 crc kubenswrapper[4947]: I1129 06:58:00.143800 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5177fef-81ba-4f18-b118-53b5dfa4bc36","Type":"ContainerStarted","Data":"5da9e2351cf0d67425719c7681fe1af903b1ea5416f3e21678a7293ee0b7d9e1"} Nov 29 06:58:00 crc kubenswrapper[4947]: I1129 06:58:00.144467 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 06:58:10 crc kubenswrapper[4947]: I1129 06:58:10.255653 4947 generic.go:334] "Generic (PLEG): container finished" podID="4d7c47be-5cbc-4cae-8eae-055a4693547c" containerID="89c1f2844aa41674ee039340c8f4908911aee71616ae7f37a62f133381644b56" exitCode=0 Nov 29 06:58:10 crc kubenswrapper[4947]: I1129 06:58:10.255698 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7l78q" event={"ID":"4d7c47be-5cbc-4cae-8eae-055a4693547c","Type":"ContainerDied","Data":"89c1f2844aa41674ee039340c8f4908911aee71616ae7f37a62f133381644b56"} Nov 29 06:58:10 crc kubenswrapper[4947]: I1129 06:58:10.283602 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=12.25739947 podStartE2EDuration="16.283579098s" podCreationTimestamp="2025-11-29 06:57:54 +0000 UTC" firstStartedPulling="2025-11-29 06:57:55.003154 +0000 UTC m=+1426.047536081" lastFinishedPulling="2025-11-29 06:57:59.029333628 +0000 UTC m=+1430.073715709" observedRunningTime="2025-11-29 06:58:00.185987516 +0000 UTC m=+1431.230369597" watchObservedRunningTime="2025-11-29 06:58:10.283579098 +0000 UTC m=+1441.327961179" Nov 29 06:58:11 crc kubenswrapper[4947]: I1129 06:58:11.627268 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7l78q" Nov 29 06:58:11 crc kubenswrapper[4947]: I1129 06:58:11.757000 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khlp4\" (UniqueName: \"kubernetes.io/projected/4d7c47be-5cbc-4cae-8eae-055a4693547c-kube-api-access-khlp4\") pod \"4d7c47be-5cbc-4cae-8eae-055a4693547c\" (UID: \"4d7c47be-5cbc-4cae-8eae-055a4693547c\") " Nov 29 06:58:11 crc kubenswrapper[4947]: I1129 06:58:11.757091 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d7c47be-5cbc-4cae-8eae-055a4693547c-combined-ca-bundle\") pod \"4d7c47be-5cbc-4cae-8eae-055a4693547c\" (UID: \"4d7c47be-5cbc-4cae-8eae-055a4693547c\") " Nov 29 06:58:11 crc kubenswrapper[4947]: I1129 06:58:11.757166 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d7c47be-5cbc-4cae-8eae-055a4693547c-config-data\") pod \"4d7c47be-5cbc-4cae-8eae-055a4693547c\" (UID: \"4d7c47be-5cbc-4cae-8eae-055a4693547c\") " Nov 29 06:58:11 crc kubenswrapper[4947]: I1129 06:58:11.757366 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d7c47be-5cbc-4cae-8eae-055a4693547c-scripts\") pod \"4d7c47be-5cbc-4cae-8eae-055a4693547c\" (UID: \"4d7c47be-5cbc-4cae-8eae-055a4693547c\") " Nov 29 06:58:11 crc kubenswrapper[4947]: I1129 06:58:11.765108 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d7c47be-5cbc-4cae-8eae-055a4693547c-kube-api-access-khlp4" (OuterVolumeSpecName: "kube-api-access-khlp4") pod "4d7c47be-5cbc-4cae-8eae-055a4693547c" (UID: "4d7c47be-5cbc-4cae-8eae-055a4693547c"). InnerVolumeSpecName "kube-api-access-khlp4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:58:11 crc kubenswrapper[4947]: I1129 06:58:11.765422 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d7c47be-5cbc-4cae-8eae-055a4693547c-scripts" (OuterVolumeSpecName: "scripts") pod "4d7c47be-5cbc-4cae-8eae-055a4693547c" (UID: "4d7c47be-5cbc-4cae-8eae-055a4693547c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:58:11 crc kubenswrapper[4947]: E1129 06:58:11.785605 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d7c47be-5cbc-4cae-8eae-055a4693547c-config-data podName:4d7c47be-5cbc-4cae-8eae-055a4693547c nodeName:}" failed. No retries permitted until 2025-11-29 06:58:12.285564457 +0000 UTC m=+1443.329946538 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/4d7c47be-5cbc-4cae-8eae-055a4693547c-config-data") pod "4d7c47be-5cbc-4cae-8eae-055a4693547c" (UID: "4d7c47be-5cbc-4cae-8eae-055a4693547c") : error deleting /var/lib/kubelet/pods/4d7c47be-5cbc-4cae-8eae-055a4693547c/volume-subpaths: remove /var/lib/kubelet/pods/4d7c47be-5cbc-4cae-8eae-055a4693547c/volume-subpaths: no such file or directory Nov 29 06:58:11 crc kubenswrapper[4947]: I1129 06:58:11.789559 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d7c47be-5cbc-4cae-8eae-055a4693547c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d7c47be-5cbc-4cae-8eae-055a4693547c" (UID: "4d7c47be-5cbc-4cae-8eae-055a4693547c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:58:11 crc kubenswrapper[4947]: I1129 06:58:11.860091 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khlp4\" (UniqueName: \"kubernetes.io/projected/4d7c47be-5cbc-4cae-8eae-055a4693547c-kube-api-access-khlp4\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:11 crc kubenswrapper[4947]: I1129 06:58:11.860163 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d7c47be-5cbc-4cae-8eae-055a4693547c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:11 crc kubenswrapper[4947]: I1129 06:58:11.860174 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d7c47be-5cbc-4cae-8eae-055a4693547c-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:12 crc kubenswrapper[4947]: I1129 06:58:12.279054 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7l78q" event={"ID":"4d7c47be-5cbc-4cae-8eae-055a4693547c","Type":"ContainerDied","Data":"6fb12f998247e26704db3f9fd9e56b382ec4bfcaaea27b568ab17b72eefa347c"} Nov 29 06:58:12 crc kubenswrapper[4947]: I1129 06:58:12.279104 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fb12f998247e26704db3f9fd9e56b382ec4bfcaaea27b568ab17b72eefa347c" Nov 29 06:58:12 crc kubenswrapper[4947]: I1129 06:58:12.279147 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7l78q" Nov 29 06:58:12 crc kubenswrapper[4947]: I1129 06:58:12.368390 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d7c47be-5cbc-4cae-8eae-055a4693547c-config-data\") pod \"4d7c47be-5cbc-4cae-8eae-055a4693547c\" (UID: \"4d7c47be-5cbc-4cae-8eae-055a4693547c\") " Nov 29 06:58:12 crc kubenswrapper[4947]: I1129 06:58:12.372536 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d7c47be-5cbc-4cae-8eae-055a4693547c-config-data" (OuterVolumeSpecName: "config-data") pod "4d7c47be-5cbc-4cae-8eae-055a4693547c" (UID: "4d7c47be-5cbc-4cae-8eae-055a4693547c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:58:12 crc kubenswrapper[4947]: I1129 06:58:12.402368 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 29 06:58:12 crc kubenswrapper[4947]: E1129 06:58:12.403310 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d7c47be-5cbc-4cae-8eae-055a4693547c" containerName="nova-cell0-conductor-db-sync" Nov 29 06:58:12 crc kubenswrapper[4947]: I1129 06:58:12.403415 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d7c47be-5cbc-4cae-8eae-055a4693547c" containerName="nova-cell0-conductor-db-sync" Nov 29 06:58:12 crc kubenswrapper[4947]: I1129 06:58:12.403781 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d7c47be-5cbc-4cae-8eae-055a4693547c" containerName="nova-cell0-conductor-db-sync" Nov 29 06:58:12 crc kubenswrapper[4947]: I1129 06:58:12.404680 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 29 06:58:12 crc kubenswrapper[4947]: I1129 06:58:12.417843 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 29 06:58:12 crc kubenswrapper[4947]: I1129 06:58:12.471163 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pkc2\" (UniqueName: \"kubernetes.io/projected/52e8eb08-68e9-4fac-b7bf-b7481388c22e-kube-api-access-9pkc2\") pod \"nova-cell0-conductor-0\" (UID: \"52e8eb08-68e9-4fac-b7bf-b7481388c22e\") " pod="openstack/nova-cell0-conductor-0" Nov 29 06:58:12 crc kubenswrapper[4947]: I1129 06:58:12.471453 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52e8eb08-68e9-4fac-b7bf-b7481388c22e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"52e8eb08-68e9-4fac-b7bf-b7481388c22e\") " pod="openstack/nova-cell0-conductor-0" Nov 29 06:58:12 crc kubenswrapper[4947]: I1129 06:58:12.471553 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52e8eb08-68e9-4fac-b7bf-b7481388c22e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"52e8eb08-68e9-4fac-b7bf-b7481388c22e\") " pod="openstack/nova-cell0-conductor-0" Nov 29 06:58:12 crc kubenswrapper[4947]: I1129 06:58:12.471831 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d7c47be-5cbc-4cae-8eae-055a4693547c-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:12 crc kubenswrapper[4947]: I1129 06:58:12.573731 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pkc2\" (UniqueName: \"kubernetes.io/projected/52e8eb08-68e9-4fac-b7bf-b7481388c22e-kube-api-access-9pkc2\") pod \"nova-cell0-conductor-0\" (UID: 
\"52e8eb08-68e9-4fac-b7bf-b7481388c22e\") " pod="openstack/nova-cell0-conductor-0" Nov 29 06:58:12 crc kubenswrapper[4947]: I1129 06:58:12.574494 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52e8eb08-68e9-4fac-b7bf-b7481388c22e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"52e8eb08-68e9-4fac-b7bf-b7481388c22e\") " pod="openstack/nova-cell0-conductor-0" Nov 29 06:58:12 crc kubenswrapper[4947]: I1129 06:58:12.574526 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52e8eb08-68e9-4fac-b7bf-b7481388c22e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"52e8eb08-68e9-4fac-b7bf-b7481388c22e\") " pod="openstack/nova-cell0-conductor-0" Nov 29 06:58:12 crc kubenswrapper[4947]: I1129 06:58:12.579695 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52e8eb08-68e9-4fac-b7bf-b7481388c22e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"52e8eb08-68e9-4fac-b7bf-b7481388c22e\") " pod="openstack/nova-cell0-conductor-0" Nov 29 06:58:12 crc kubenswrapper[4947]: I1129 06:58:12.580161 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52e8eb08-68e9-4fac-b7bf-b7481388c22e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"52e8eb08-68e9-4fac-b7bf-b7481388c22e\") " pod="openstack/nova-cell0-conductor-0" Nov 29 06:58:12 crc kubenswrapper[4947]: I1129 06:58:12.594484 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pkc2\" (UniqueName: \"kubernetes.io/projected/52e8eb08-68e9-4fac-b7bf-b7481388c22e-kube-api-access-9pkc2\") pod \"nova-cell0-conductor-0\" (UID: \"52e8eb08-68e9-4fac-b7bf-b7481388c22e\") " pod="openstack/nova-cell0-conductor-0" Nov 29 06:58:12 crc kubenswrapper[4947]: I1129 
06:58:12.748534 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 29 06:58:13 crc kubenswrapper[4947]: I1129 06:58:13.232165 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 29 06:58:13 crc kubenswrapper[4947]: W1129 06:58:13.235576 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52e8eb08_68e9_4fac_b7bf_b7481388c22e.slice/crio-faa115685b4f6ceb564215fa3f75055401a5d7c468bd7cb58576b9fa75c50e69 WatchSource:0}: Error finding container faa115685b4f6ceb564215fa3f75055401a5d7c468bd7cb58576b9fa75c50e69: Status 404 returned error can't find the container with id faa115685b4f6ceb564215fa3f75055401a5d7c468bd7cb58576b9fa75c50e69 Nov 29 06:58:13 crc kubenswrapper[4947]: I1129 06:58:13.293259 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"52e8eb08-68e9-4fac-b7bf-b7481388c22e","Type":"ContainerStarted","Data":"faa115685b4f6ceb564215fa3f75055401a5d7c468bd7cb58576b9fa75c50e69"} Nov 29 06:58:14 crc kubenswrapper[4947]: I1129 06:58:14.306736 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"52e8eb08-68e9-4fac-b7bf-b7481388c22e","Type":"ContainerStarted","Data":"4479e76d473794c8127e3ee3d0abf66cb4cd0b895ef8cbea7eabc96c2383804f"} Nov 29 06:58:14 crc kubenswrapper[4947]: I1129 06:58:14.307237 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 29 06:58:14 crc kubenswrapper[4947]: I1129 06:58:14.337447 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.33741089 podStartE2EDuration="2.33741089s" podCreationTimestamp="2025-11-29 06:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:58:14.330873015 +0000 UTC m=+1445.375255116" watchObservedRunningTime="2025-11-29 06:58:14.33741089 +0000 UTC m=+1445.381793011" Nov 29 06:58:22 crc kubenswrapper[4947]: I1129 06:58:22.782759 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 29 06:58:22 crc kubenswrapper[4947]: I1129 06:58:22.987837 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 06:58:22 crc kubenswrapper[4947]: I1129 06:58:22.987941 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 06:58:23 crc kubenswrapper[4947]: I1129 06:58:23.836981 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-7w6zr"] Nov 29 06:58:23 crc kubenswrapper[4947]: I1129 06:58:23.839960 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7w6zr" Nov 29 06:58:23 crc kubenswrapper[4947]: I1129 06:58:23.844446 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 29 06:58:23 crc kubenswrapper[4947]: I1129 06:58:23.845053 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 29 06:58:23 crc kubenswrapper[4947]: I1129 06:58:23.849255 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7w6zr"] Nov 29 06:58:23 crc kubenswrapper[4947]: I1129 06:58:23.933184 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc39fbb3-cd61-46df-9549-9a75ff63206e-config-data\") pod \"nova-cell0-cell-mapping-7w6zr\" (UID: \"bc39fbb3-cd61-46df-9549-9a75ff63206e\") " pod="openstack/nova-cell0-cell-mapping-7w6zr" Nov 29 06:58:23 crc kubenswrapper[4947]: I1129 06:58:23.933640 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc39fbb3-cd61-46df-9549-9a75ff63206e-scripts\") pod \"nova-cell0-cell-mapping-7w6zr\" (UID: \"bc39fbb3-cd61-46df-9549-9a75ff63206e\") " pod="openstack/nova-cell0-cell-mapping-7w6zr" Nov 29 06:58:23 crc kubenswrapper[4947]: I1129 06:58:23.933678 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc39fbb3-cd61-46df-9549-9a75ff63206e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7w6zr\" (UID: \"bc39fbb3-cd61-46df-9549-9a75ff63206e\") " pod="openstack/nova-cell0-cell-mapping-7w6zr" Nov 29 06:58:23 crc kubenswrapper[4947]: I1129 06:58:23.933819 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h66h\" (UniqueName: 
\"kubernetes.io/projected/bc39fbb3-cd61-46df-9549-9a75ff63206e-kube-api-access-5h66h\") pod \"nova-cell0-cell-mapping-7w6zr\" (UID: \"bc39fbb3-cd61-46df-9549-9a75ff63206e\") " pod="openstack/nova-cell0-cell-mapping-7w6zr" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.025543 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.027298 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.030603 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.035654 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc39fbb3-cd61-46df-9549-9a75ff63206e-scripts\") pod \"nova-cell0-cell-mapping-7w6zr\" (UID: \"bc39fbb3-cd61-46df-9549-9a75ff63206e\") " pod="openstack/nova-cell0-cell-mapping-7w6zr" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.035703 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc39fbb3-cd61-46df-9549-9a75ff63206e-config-data\") pod \"nova-cell0-cell-mapping-7w6zr\" (UID: \"bc39fbb3-cd61-46df-9549-9a75ff63206e\") " pod="openstack/nova-cell0-cell-mapping-7w6zr" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.035739 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc39fbb3-cd61-46df-9549-9a75ff63206e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7w6zr\" (UID: \"bc39fbb3-cd61-46df-9549-9a75ff63206e\") " pod="openstack/nova-cell0-cell-mapping-7w6zr" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.035830 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5h66h\" (UniqueName: \"kubernetes.io/projected/bc39fbb3-cd61-46df-9549-9a75ff63206e-kube-api-access-5h66h\") pod \"nova-cell0-cell-mapping-7w6zr\" (UID: \"bc39fbb3-cd61-46df-9549-9a75ff63206e\") " pod="openstack/nova-cell0-cell-mapping-7w6zr" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.045105 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.051458 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc39fbb3-cd61-46df-9549-9a75ff63206e-config-data\") pod \"nova-cell0-cell-mapping-7w6zr\" (UID: \"bc39fbb3-cd61-46df-9549-9a75ff63206e\") " pod="openstack/nova-cell0-cell-mapping-7w6zr" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.052691 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc39fbb3-cd61-46df-9549-9a75ff63206e-scripts\") pod \"nova-cell0-cell-mapping-7w6zr\" (UID: \"bc39fbb3-cd61-46df-9549-9a75ff63206e\") " pod="openstack/nova-cell0-cell-mapping-7w6zr" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.067305 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc39fbb3-cd61-46df-9549-9a75ff63206e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7w6zr\" (UID: \"bc39fbb3-cd61-46df-9549-9a75ff63206e\") " pod="openstack/nova-cell0-cell-mapping-7w6zr" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.071044 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h66h\" (UniqueName: \"kubernetes.io/projected/bc39fbb3-cd61-46df-9549-9a75ff63206e-kube-api-access-5h66h\") pod \"nova-cell0-cell-mapping-7w6zr\" (UID: \"bc39fbb3-cd61-46df-9549-9a75ff63206e\") " pod="openstack/nova-cell0-cell-mapping-7w6zr" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.137644 4947 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/954ecbc2-8aeb-4852-a501-aa51793e1cf6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"954ecbc2-8aeb-4852-a501-aa51793e1cf6\") " pod="openstack/nova-api-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.137767 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwnn7\" (UniqueName: \"kubernetes.io/projected/954ecbc2-8aeb-4852-a501-aa51793e1cf6-kube-api-access-vwnn7\") pod \"nova-api-0\" (UID: \"954ecbc2-8aeb-4852-a501-aa51793e1cf6\") " pod="openstack/nova-api-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.137809 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/954ecbc2-8aeb-4852-a501-aa51793e1cf6-config-data\") pod \"nova-api-0\" (UID: \"954ecbc2-8aeb-4852-a501-aa51793e1cf6\") " pod="openstack/nova-api-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.137846 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/954ecbc2-8aeb-4852-a501-aa51793e1cf6-logs\") pod \"nova-api-0\" (UID: \"954ecbc2-8aeb-4852-a501-aa51793e1cf6\") " pod="openstack/nova-api-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.174829 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7w6zr" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.228370 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.230267 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.235397 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.239316 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/954ecbc2-8aeb-4852-a501-aa51793e1cf6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"954ecbc2-8aeb-4852-a501-aa51793e1cf6\") " pod="openstack/nova-api-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.239420 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwnn7\" (UniqueName: \"kubernetes.io/projected/954ecbc2-8aeb-4852-a501-aa51793e1cf6-kube-api-access-vwnn7\") pod \"nova-api-0\" (UID: \"954ecbc2-8aeb-4852-a501-aa51793e1cf6\") " pod="openstack/nova-api-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.239474 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/954ecbc2-8aeb-4852-a501-aa51793e1cf6-config-data\") pod \"nova-api-0\" (UID: \"954ecbc2-8aeb-4852-a501-aa51793e1cf6\") " pod="openstack/nova-api-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.239524 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/954ecbc2-8aeb-4852-a501-aa51793e1cf6-logs\") pod \"nova-api-0\" (UID: \"954ecbc2-8aeb-4852-a501-aa51793e1cf6\") " pod="openstack/nova-api-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.242746 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/954ecbc2-8aeb-4852-a501-aa51793e1cf6-logs\") pod \"nova-api-0\" (UID: \"954ecbc2-8aeb-4852-a501-aa51793e1cf6\") " pod="openstack/nova-api-0" Nov 29 06:58:24 crc kubenswrapper[4947]: 
I1129 06:58:24.253610 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/954ecbc2-8aeb-4852-a501-aa51793e1cf6-config-data\") pod \"nova-api-0\" (UID: \"954ecbc2-8aeb-4852-a501-aa51793e1cf6\") " pod="openstack/nova-api-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.258124 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/954ecbc2-8aeb-4852-a501-aa51793e1cf6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"954ecbc2-8aeb-4852-a501-aa51793e1cf6\") " pod="openstack/nova-api-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.273577 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.275368 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.279736 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.284763 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwnn7\" (UniqueName: \"kubernetes.io/projected/954ecbc2-8aeb-4852-a501-aa51793e1cf6-kube-api-access-vwnn7\") pod \"nova-api-0\" (UID: \"954ecbc2-8aeb-4852-a501-aa51793e1cf6\") " pod="openstack/nova-api-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.316994 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.343597 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b45492d-4070-4898-95e7-b15ffef7e11c-logs\") pod \"nova-metadata-0\" (UID: \"3b45492d-4070-4898-95e7-b15ffef7e11c\") " 
pod="openstack/nova-metadata-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.343713 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpqk6\" (UniqueName: \"kubernetes.io/projected/3b45492d-4070-4898-95e7-b15ffef7e11c-kube-api-access-gpqk6\") pod \"nova-metadata-0\" (UID: \"3b45492d-4070-4898-95e7-b15ffef7e11c\") " pod="openstack/nova-metadata-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.343759 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b45492d-4070-4898-95e7-b15ffef7e11c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3b45492d-4070-4898-95e7-b15ffef7e11c\") " pod="openstack/nova-metadata-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.343854 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b45492d-4070-4898-95e7-b15ffef7e11c-config-data\") pod \"nova-metadata-0\" (UID: \"3b45492d-4070-4898-95e7-b15ffef7e11c\") " pod="openstack/nova-metadata-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.390787 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.441502 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.442949 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.447118 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.448411 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b45492d-4070-4898-95e7-b15ffef7e11c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3b45492d-4070-4898-95e7-b15ffef7e11c\") " pod="openstack/nova-metadata-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.448456 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0549fba3-0ee0-400a-a8a2-545d91aa6a2b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0549fba3-0ee0-400a-a8a2-545d91aa6a2b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.448524 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bv8s\" (UniqueName: \"kubernetes.io/projected/0549fba3-0ee0-400a-a8a2-545d91aa6a2b-kube-api-access-5bv8s\") pod \"nova-cell1-novncproxy-0\" (UID: \"0549fba3-0ee0-400a-a8a2-545d91aa6a2b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.448571 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b45492d-4070-4898-95e7-b15ffef7e11c-config-data\") pod \"nova-metadata-0\" (UID: \"3b45492d-4070-4898-95e7-b15ffef7e11c\") " pod="openstack/nova-metadata-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.448602 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3b45492d-4070-4898-95e7-b15ffef7e11c-logs\") pod \"nova-metadata-0\" (UID: \"3b45492d-4070-4898-95e7-b15ffef7e11c\") " pod="openstack/nova-metadata-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.448622 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0549fba3-0ee0-400a-a8a2-545d91aa6a2b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0549fba3-0ee0-400a-a8a2-545d91aa6a2b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.448679 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpqk6\" (UniqueName: \"kubernetes.io/projected/3b45492d-4070-4898-95e7-b15ffef7e11c-kube-api-access-gpqk6\") pod \"nova-metadata-0\" (UID: \"3b45492d-4070-4898-95e7-b15ffef7e11c\") " pod="openstack/nova-metadata-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.449812 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b45492d-4070-4898-95e7-b15ffef7e11c-logs\") pod \"nova-metadata-0\" (UID: \"3b45492d-4070-4898-95e7-b15ffef7e11c\") " pod="openstack/nova-metadata-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.455835 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b45492d-4070-4898-95e7-b15ffef7e11c-config-data\") pod \"nova-metadata-0\" (UID: \"3b45492d-4070-4898-95e7-b15ffef7e11c\") " pod="openstack/nova-metadata-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.470359 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-2kftb"] Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.471833 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.472675 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-2kftb" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.477602 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b45492d-4070-4898-95e7-b15ffef7e11c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3b45492d-4070-4898-95e7-b15ffef7e11c\") " pod="openstack/nova-metadata-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.479983 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpqk6\" (UniqueName: \"kubernetes.io/projected/3b45492d-4070-4898-95e7-b15ffef7e11c-kube-api-access-gpqk6\") pod \"nova-metadata-0\" (UID: \"3b45492d-4070-4898-95e7-b15ffef7e11c\") " pod="openstack/nova-metadata-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.511748 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.522856 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.538678 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-2kftb"] Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.553761 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj6jq\" (UniqueName: \"kubernetes.io/projected/7185c3eb-f19f-4e6e-9625-77304f01c880-kube-api-access-tj6jq\") pod \"nova-scheduler-0\" (UID: \"7185c3eb-f19f-4e6e-9625-77304f01c880\") " pod="openstack/nova-scheduler-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.554795 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7185c3eb-f19f-4e6e-9625-77304f01c880-config-data\") pod \"nova-scheduler-0\" (UID: \"7185c3eb-f19f-4e6e-9625-77304f01c880\") " pod="openstack/nova-scheduler-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.554922 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0549fba3-0ee0-400a-a8a2-545d91aa6a2b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0549fba3-0ee0-400a-a8a2-545d91aa6a2b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.555099 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bv8s\" (UniqueName: \"kubernetes.io/projected/0549fba3-0ee0-400a-a8a2-545d91aa6a2b-kube-api-access-5bv8s\") pod \"nova-cell1-novncproxy-0\" (UID: \"0549fba3-0ee0-400a-a8a2-545d91aa6a2b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.555199 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7185c3eb-f19f-4e6e-9625-77304f01c880-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7185c3eb-f19f-4e6e-9625-77304f01c880\") " pod="openstack/nova-scheduler-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.555408 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0549fba3-0ee0-400a-a8a2-545d91aa6a2b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0549fba3-0ee0-400a-a8a2-545d91aa6a2b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.583807 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0549fba3-0ee0-400a-a8a2-545d91aa6a2b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0549fba3-0ee0-400a-a8a2-545d91aa6a2b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.595305 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bv8s\" (UniqueName: \"kubernetes.io/projected/0549fba3-0ee0-400a-a8a2-545d91aa6a2b-kube-api-access-5bv8s\") pod \"nova-cell1-novncproxy-0\" (UID: \"0549fba3-0ee0-400a-a8a2-545d91aa6a2b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.598940 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0549fba3-0ee0-400a-a8a2-545d91aa6a2b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0549fba3-0ee0-400a-a8a2-545d91aa6a2b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.657860 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-2kftb\" (UID: \"bdbb0f6f-844f-420f-b832-87f194453bce\") " pod="openstack/dnsmasq-dns-566b5b7845-2kftb" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.658329 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj6jq\" (UniqueName: \"kubernetes.io/projected/7185c3eb-f19f-4e6e-9625-77304f01c880-kube-api-access-tj6jq\") pod \"nova-scheduler-0\" (UID: \"7185c3eb-f19f-4e6e-9625-77304f01c880\") " pod="openstack/nova-scheduler-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.658354 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7185c3eb-f19f-4e6e-9625-77304f01c880-config-data\") pod 
\"nova-scheduler-0\" (UID: \"7185c3eb-f19f-4e6e-9625-77304f01c880\") " pod="openstack/nova-scheduler-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.658405 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwvgd\" (UniqueName: \"kubernetes.io/projected/bdbb0f6f-844f-420f-b832-87f194453bce-kube-api-access-jwvgd\") pod \"dnsmasq-dns-566b5b7845-2kftb\" (UID: \"bdbb0f6f-844f-420f-b832-87f194453bce\") " pod="openstack/dnsmasq-dns-566b5b7845-2kftb" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.658453 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-2kftb\" (UID: \"bdbb0f6f-844f-420f-b832-87f194453bce\") " pod="openstack/dnsmasq-dns-566b5b7845-2kftb" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.658492 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7185c3eb-f19f-4e6e-9625-77304f01c880-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7185c3eb-f19f-4e6e-9625-77304f01c880\") " pod="openstack/nova-scheduler-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.658550 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-config\") pod \"dnsmasq-dns-566b5b7845-2kftb\" (UID: \"bdbb0f6f-844f-420f-b832-87f194453bce\") " pod="openstack/dnsmasq-dns-566b5b7845-2kftb" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.658574 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-dns-svc\") pod \"dnsmasq-dns-566b5b7845-2kftb\" 
(UID: \"bdbb0f6f-844f-420f-b832-87f194453bce\") " pod="openstack/dnsmasq-dns-566b5b7845-2kftb" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.667323 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7185c3eb-f19f-4e6e-9625-77304f01c880-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7185c3eb-f19f-4e6e-9625-77304f01c880\") " pod="openstack/nova-scheduler-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.668473 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7185c3eb-f19f-4e6e-9625-77304f01c880-config-data\") pod \"nova-scheduler-0\" (UID: \"7185c3eb-f19f-4e6e-9625-77304f01c880\") " pod="openstack/nova-scheduler-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.682966 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj6jq\" (UniqueName: \"kubernetes.io/projected/7185c3eb-f19f-4e6e-9625-77304f01c880-kube-api-access-tj6jq\") pod \"nova-scheduler-0\" (UID: \"7185c3eb-f19f-4e6e-9625-77304f01c880\") " pod="openstack/nova-scheduler-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.688776 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.722778 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.760894 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-config\") pod \"dnsmasq-dns-566b5b7845-2kftb\" (UID: \"bdbb0f6f-844f-420f-b832-87f194453bce\") " pod="openstack/dnsmasq-dns-566b5b7845-2kftb" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.760949 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-dns-svc\") pod \"dnsmasq-dns-566b5b7845-2kftb\" (UID: \"bdbb0f6f-844f-420f-b832-87f194453bce\") " pod="openstack/dnsmasq-dns-566b5b7845-2kftb" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.761033 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-2kftb\" (UID: \"bdbb0f6f-844f-420f-b832-87f194453bce\") " pod="openstack/dnsmasq-dns-566b5b7845-2kftb" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.761075 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwvgd\" (UniqueName: \"kubernetes.io/projected/bdbb0f6f-844f-420f-b832-87f194453bce-kube-api-access-jwvgd\") pod \"dnsmasq-dns-566b5b7845-2kftb\" (UID: \"bdbb0f6f-844f-420f-b832-87f194453bce\") " pod="openstack/dnsmasq-dns-566b5b7845-2kftb" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.761118 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-2kftb\" (UID: \"bdbb0f6f-844f-420f-b832-87f194453bce\") " pod="openstack/dnsmasq-dns-566b5b7845-2kftb" Nov 
29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.762310 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-config\") pod \"dnsmasq-dns-566b5b7845-2kftb\" (UID: \"bdbb0f6f-844f-420f-b832-87f194453bce\") " pod="openstack/dnsmasq-dns-566b5b7845-2kftb" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.762523 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-2kftb\" (UID: \"bdbb0f6f-844f-420f-b832-87f194453bce\") " pod="openstack/dnsmasq-dns-566b5b7845-2kftb" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.762727 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-2kftb\" (UID: \"bdbb0f6f-844f-420f-b832-87f194453bce\") " pod="openstack/dnsmasq-dns-566b5b7845-2kftb" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.763230 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-dns-svc\") pod \"dnsmasq-dns-566b5b7845-2kftb\" (UID: \"bdbb0f6f-844f-420f-b832-87f194453bce\") " pod="openstack/dnsmasq-dns-566b5b7845-2kftb" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.774882 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.788438 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwvgd\" (UniqueName: \"kubernetes.io/projected/bdbb0f6f-844f-420f-b832-87f194453bce-kube-api-access-jwvgd\") pod \"dnsmasq-dns-566b5b7845-2kftb\" (UID: \"bdbb0f6f-844f-420f-b832-87f194453bce\") " pod="openstack/dnsmasq-dns-566b5b7845-2kftb" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.800129 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-2kftb" Nov 29 06:58:24 crc kubenswrapper[4947]: I1129 06:58:24.923566 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7w6zr"] Nov 29 06:58:24 crc kubenswrapper[4947]: W1129 06:58:24.958362 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc39fbb3_cd61_46df_9549_9a75ff63206e.slice/crio-861986d765691be9c314e7e37e21351f8e8e7a05b95f33ce7b5769f35dafb224 WatchSource:0}: Error finding container 861986d765691be9c314e7e37e21351f8e8e7a05b95f33ce7b5769f35dafb224: Status 404 returned error can't find the container with id 861986d765691be9c314e7e37e21351f8e8e7a05b95f33ce7b5769f35dafb224 Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.096290 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.164780 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dvx7w"] Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.166110 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dvx7w" Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.169866 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.170069 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.224282 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dvx7w"] Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.274196 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrnkx\" (UniqueName: \"kubernetes.io/projected/fdacfe2a-30f5-443f-b368-9019ec66fb2e-kube-api-access-zrnkx\") pod \"nova-cell1-conductor-db-sync-dvx7w\" (UID: \"fdacfe2a-30f5-443f-b368-9019ec66fb2e\") " pod="openstack/nova-cell1-conductor-db-sync-dvx7w" Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.274298 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdacfe2a-30f5-443f-b368-9019ec66fb2e-scripts\") pod \"nova-cell1-conductor-db-sync-dvx7w\" (UID: \"fdacfe2a-30f5-443f-b368-9019ec66fb2e\") " pod="openstack/nova-cell1-conductor-db-sync-dvx7w" Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.274370 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdacfe2a-30f5-443f-b368-9019ec66fb2e-config-data\") pod \"nova-cell1-conductor-db-sync-dvx7w\" (UID: \"fdacfe2a-30f5-443f-b368-9019ec66fb2e\") " pod="openstack/nova-cell1-conductor-db-sync-dvx7w" Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.274396 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdacfe2a-30f5-443f-b368-9019ec66fb2e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dvx7w\" (UID: \"fdacfe2a-30f5-443f-b368-9019ec66fb2e\") " pod="openstack/nova-cell1-conductor-db-sync-dvx7w" Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.294719 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.377317 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrnkx\" (UniqueName: \"kubernetes.io/projected/fdacfe2a-30f5-443f-b368-9019ec66fb2e-kube-api-access-zrnkx\") pod \"nova-cell1-conductor-db-sync-dvx7w\" (UID: \"fdacfe2a-30f5-443f-b368-9019ec66fb2e\") " pod="openstack/nova-cell1-conductor-db-sync-dvx7w" Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.378328 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdacfe2a-30f5-443f-b368-9019ec66fb2e-scripts\") pod \"nova-cell1-conductor-db-sync-dvx7w\" (UID: \"fdacfe2a-30f5-443f-b368-9019ec66fb2e\") " pod="openstack/nova-cell1-conductor-db-sync-dvx7w" Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.378481 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdacfe2a-30f5-443f-b368-9019ec66fb2e-config-data\") pod \"nova-cell1-conductor-db-sync-dvx7w\" (UID: \"fdacfe2a-30f5-443f-b368-9019ec66fb2e\") " pod="openstack/nova-cell1-conductor-db-sync-dvx7w" Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.378516 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdacfe2a-30f5-443f-b368-9019ec66fb2e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dvx7w\" (UID: \"fdacfe2a-30f5-443f-b368-9019ec66fb2e\") " 
pod="openstack/nova-cell1-conductor-db-sync-dvx7w" Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.386832 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdacfe2a-30f5-443f-b368-9019ec66fb2e-scripts\") pod \"nova-cell1-conductor-db-sync-dvx7w\" (UID: \"fdacfe2a-30f5-443f-b368-9019ec66fb2e\") " pod="openstack/nova-cell1-conductor-db-sync-dvx7w" Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.388620 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdacfe2a-30f5-443f-b368-9019ec66fb2e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dvx7w\" (UID: \"fdacfe2a-30f5-443f-b368-9019ec66fb2e\") " pod="openstack/nova-cell1-conductor-db-sync-dvx7w" Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.408616 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdacfe2a-30f5-443f-b368-9019ec66fb2e-config-data\") pod \"nova-cell1-conductor-db-sync-dvx7w\" (UID: \"fdacfe2a-30f5-443f-b368-9019ec66fb2e\") " pod="openstack/nova-cell1-conductor-db-sync-dvx7w" Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.415968 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrnkx\" (UniqueName: \"kubernetes.io/projected/fdacfe2a-30f5-443f-b368-9019ec66fb2e-kube-api-access-zrnkx\") pod \"nova-cell1-conductor-db-sync-dvx7w\" (UID: \"fdacfe2a-30f5-443f-b368-9019ec66fb2e\") " pod="openstack/nova-cell1-conductor-db-sync-dvx7w" Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.436999 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b45492d-4070-4898-95e7-b15ffef7e11c","Type":"ContainerStarted","Data":"40b7aa7112a49a5adb44b60f68d06179b32bbdf80d48cb069c74bcff982f3ad1"} Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.438049 4947 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"954ecbc2-8aeb-4852-a501-aa51793e1cf6","Type":"ContainerStarted","Data":"f6587b98e41dde67ecd6879c40bbea9c78a37485e018e6d5f3909d6b2ac6ad8e"} Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.440317 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7w6zr" event={"ID":"bc39fbb3-cd61-46df-9549-9a75ff63206e","Type":"ContainerStarted","Data":"861986d765691be9c314e7e37e21351f8e8e7a05b95f33ce7b5769f35dafb224"} Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.486428 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 06:58:25 crc kubenswrapper[4947]: W1129 06:58:25.489666 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0549fba3_0ee0_400a_a8a2_545d91aa6a2b.slice/crio-edd501b2b8bc7b72b6985f1b327951ed62944ba763033b1ad97ff4035144b6f5 WatchSource:0}: Error finding container edd501b2b8bc7b72b6985f1b327951ed62944ba763033b1ad97ff4035144b6f5: Status 404 returned error can't find the container with id edd501b2b8bc7b72b6985f1b327951ed62944ba763033b1ad97ff4035144b6f5 Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.498573 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-2kftb"] Nov 29 06:58:25 crc kubenswrapper[4947]: W1129 06:58:25.506566 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdbb0f6f_844f_420f_b832_87f194453bce.slice/crio-4ba821c1db8a3d272e8faf0ab2faf7be875d89b205b6b4901cd7d63f39abeb9a WatchSource:0}: Error finding container 4ba821c1db8a3d272e8faf0ab2faf7be875d89b205b6b4901cd7d63f39abeb9a: Status 404 returned error can't find the container with id 4ba821c1db8a3d272e8faf0ab2faf7be875d89b205b6b4901cd7d63f39abeb9a Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 
06:58:25.514865 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 06:58:25 crc kubenswrapper[4947]: I1129 06:58:25.653263 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dvx7w" Nov 29 06:58:26 crc kubenswrapper[4947]: I1129 06:58:26.188937 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dvx7w"] Nov 29 06:58:26 crc kubenswrapper[4947]: W1129 06:58:26.217677 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdacfe2a_30f5_443f_b368_9019ec66fb2e.slice/crio-74874b1f177acab1c66227cc94d092a66327e34ea3aae0f4ece2cdd79928f141 WatchSource:0}: Error finding container 74874b1f177acab1c66227cc94d092a66327e34ea3aae0f4ece2cdd79928f141: Status 404 returned error can't find the container with id 74874b1f177acab1c66227cc94d092a66327e34ea3aae0f4ece2cdd79928f141 Nov 29 06:58:26 crc kubenswrapper[4947]: I1129 06:58:26.455200 4947 generic.go:334] "Generic (PLEG): container finished" podID="bdbb0f6f-844f-420f-b832-87f194453bce" containerID="caa9c510c9b79f3f4683e017d41493f057cbf1b0f029448aa427d2e5d9f9366d" exitCode=0 Nov 29 06:58:26 crc kubenswrapper[4947]: I1129 06:58:26.455800 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-2kftb" event={"ID":"bdbb0f6f-844f-420f-b832-87f194453bce","Type":"ContainerDied","Data":"caa9c510c9b79f3f4683e017d41493f057cbf1b0f029448aa427d2e5d9f9366d"} Nov 29 06:58:26 crc kubenswrapper[4947]: I1129 06:58:26.455849 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-2kftb" event={"ID":"bdbb0f6f-844f-420f-b832-87f194453bce","Type":"ContainerStarted","Data":"4ba821c1db8a3d272e8faf0ab2faf7be875d89b205b6b4901cd7d63f39abeb9a"} Nov 29 06:58:26 crc kubenswrapper[4947]: I1129 06:58:26.458454 4947 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-scheduler-0" event={"ID":"7185c3eb-f19f-4e6e-9625-77304f01c880","Type":"ContainerStarted","Data":"7250984409eebdbfd422627486a4ed36f73113c307cbd6b218cfefc61e59fbce"} Nov 29 06:58:26 crc kubenswrapper[4947]: I1129 06:58:26.463048 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dvx7w" event={"ID":"fdacfe2a-30f5-443f-b368-9019ec66fb2e","Type":"ContainerStarted","Data":"74874b1f177acab1c66227cc94d092a66327e34ea3aae0f4ece2cdd79928f141"} Nov 29 06:58:26 crc kubenswrapper[4947]: I1129 06:58:26.468927 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0549fba3-0ee0-400a-a8a2-545d91aa6a2b","Type":"ContainerStarted","Data":"edd501b2b8bc7b72b6985f1b327951ed62944ba763033b1ad97ff4035144b6f5"} Nov 29 06:58:26 crc kubenswrapper[4947]: I1129 06:58:26.472627 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7w6zr" event={"ID":"bc39fbb3-cd61-46df-9549-9a75ff63206e","Type":"ContainerStarted","Data":"88301c4a6525dd3464076b30396bec71b2f89f07078bcb17eb60b208cd865eba"} Nov 29 06:58:26 crc kubenswrapper[4947]: I1129 06:58:26.527954 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-7w6zr" podStartSLOduration=3.527841746 podStartE2EDuration="3.527841746s" podCreationTimestamp="2025-11-29 06:58:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:58:26.52443966 +0000 UTC m=+1457.568821751" watchObservedRunningTime="2025-11-29 06:58:26.527841746 +0000 UTC m=+1457.572223827" Nov 29 06:58:27 crc kubenswrapper[4947]: I1129 06:58:27.493073 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dvx7w" 
event={"ID":"fdacfe2a-30f5-443f-b368-9019ec66fb2e","Type":"ContainerStarted","Data":"9ad70e03fefbcb9419496af9cc1b1c36fb6799180c7e812b53b5325346d0143a"} Nov 29 06:58:27 crc kubenswrapper[4947]: I1129 06:58:27.496403 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-2kftb" event={"ID":"bdbb0f6f-844f-420f-b832-87f194453bce","Type":"ContainerStarted","Data":"b4f31d5dd03ce54fcb1f43df5ba0bc27a290fa319c3dfc294564a7bf76c0c71a"} Nov 29 06:58:27 crc kubenswrapper[4947]: I1129 06:58:27.496575 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-566b5b7845-2kftb" Nov 29 06:58:27 crc kubenswrapper[4947]: I1129 06:58:27.514319 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-dvx7w" podStartSLOduration=2.5142787049999997 podStartE2EDuration="2.514278705s" podCreationTimestamp="2025-11-29 06:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:58:27.512364727 +0000 UTC m=+1458.556746808" watchObservedRunningTime="2025-11-29 06:58:27.514278705 +0000 UTC m=+1458.558660786" Nov 29 06:58:27 crc kubenswrapper[4947]: I1129 06:58:27.550848 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-566b5b7845-2kftb" podStartSLOduration=3.550823254 podStartE2EDuration="3.550823254s" podCreationTimestamp="2025-11-29 06:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:58:27.539761326 +0000 UTC m=+1458.584143407" watchObservedRunningTime="2025-11-29 06:58:27.550823254 +0000 UTC m=+1458.595205335" Nov 29 06:58:27 crc kubenswrapper[4947]: I1129 06:58:27.805743 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 06:58:27 crc kubenswrapper[4947]: 
I1129 06:58:27.825625 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 06:58:30 crc kubenswrapper[4947]: I1129 06:58:30.301721 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 06:58:30 crc kubenswrapper[4947]: I1129 06:58:30.304281 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="cc514c2a-183d-404e-ba5d-7641695da78c" containerName="kube-state-metrics" containerID="cri-o://1b4e43ccec099e72145483dfd5425f13d8f76c9fab6b04efea0336625cdf037c" gracePeriod=30 Nov 29 06:58:30 crc kubenswrapper[4947]: I1129 06:58:30.543228 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b45492d-4070-4898-95e7-b15ffef7e11c","Type":"ContainerStarted","Data":"dc5ae4f2a27f16dec9f3a6d76b66cf9965cfd88347f9939f55c5178c1b18c063"} Nov 29 06:58:30 crc kubenswrapper[4947]: I1129 06:58:30.543646 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3b45492d-4070-4898-95e7-b15ffef7e11c" containerName="nova-metadata-log" containerID="cri-o://7e2ce4a8b293b02dd386adfb738bc53fd5c8f38a3c48fb6e1f08ffd7a020b066" gracePeriod=30 Nov 29 06:58:30 crc kubenswrapper[4947]: I1129 06:58:30.543937 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3b45492d-4070-4898-95e7-b15ffef7e11c" containerName="nova-metadata-metadata" containerID="cri-o://dc5ae4f2a27f16dec9f3a6d76b66cf9965cfd88347f9939f55c5178c1b18c063" gracePeriod=30 Nov 29 06:58:30 crc kubenswrapper[4947]: I1129 06:58:30.543925 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b45492d-4070-4898-95e7-b15ffef7e11c","Type":"ContainerStarted","Data":"7e2ce4a8b293b02dd386adfb738bc53fd5c8f38a3c48fb6e1f08ffd7a020b066"} Nov 29 06:58:30 crc kubenswrapper[4947]: I1129 06:58:30.556991 
4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"954ecbc2-8aeb-4852-a501-aa51793e1cf6","Type":"ContainerStarted","Data":"0fab570bcdbe21a6be7498564ecc73fe8965348180fe9d5b18f7958d7b4700f6"} Nov 29 06:58:30 crc kubenswrapper[4947]: I1129 06:58:30.557446 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"954ecbc2-8aeb-4852-a501-aa51793e1cf6","Type":"ContainerStarted","Data":"302f4aae5ead7b6804f23f02a4bf311ef4a162fff553d9b1f519be79ef1469e7"} Nov 29 06:58:30 crc kubenswrapper[4947]: I1129 06:58:30.563766 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7185c3eb-f19f-4e6e-9625-77304f01c880","Type":"ContainerStarted","Data":"8d0f40ca186354fe09edd525ffdf4a4f4af91c911c09d4dc71a85681e99a1c12"} Nov 29 06:58:30 crc kubenswrapper[4947]: I1129 06:58:30.571097 4947 generic.go:334] "Generic (PLEG): container finished" podID="cc514c2a-183d-404e-ba5d-7641695da78c" containerID="1b4e43ccec099e72145483dfd5425f13d8f76c9fab6b04efea0336625cdf037c" exitCode=2 Nov 29 06:58:30 crc kubenswrapper[4947]: I1129 06:58:30.571615 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cc514c2a-183d-404e-ba5d-7641695da78c","Type":"ContainerDied","Data":"1b4e43ccec099e72145483dfd5425f13d8f76c9fab6b04efea0336625cdf037c"} Nov 29 06:58:30 crc kubenswrapper[4947]: I1129 06:58:30.575753 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.729594448 podStartE2EDuration="6.575732089s" podCreationTimestamp="2025-11-29 06:58:24 +0000 UTC" firstStartedPulling="2025-11-29 06:58:25.29532086 +0000 UTC m=+1456.339702931" lastFinishedPulling="2025-11-29 06:58:29.141458491 +0000 UTC m=+1460.185840572" observedRunningTime="2025-11-29 06:58:30.564583949 +0000 UTC m=+1461.608966020" watchObservedRunningTime="2025-11-29 06:58:30.575732089 +0000 UTC 
m=+1461.620114170" Nov 29 06:58:30 crc kubenswrapper[4947]: I1129 06:58:30.583692 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0549fba3-0ee0-400a-a8a2-545d91aa6a2b","Type":"ContainerStarted","Data":"0256943bfa58997ba30045b6c097151a0b5ea3412963e6ffe9e53ec0572ba363"} Nov 29 06:58:30 crc kubenswrapper[4947]: I1129 06:58:30.583840 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="0549fba3-0ee0-400a-a8a2-545d91aa6a2b" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://0256943bfa58997ba30045b6c097151a0b5ea3412963e6ffe9e53ec0572ba363" gracePeriod=30 Nov 29 06:58:30 crc kubenswrapper[4947]: I1129 06:58:30.604212 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.564359004 podStartE2EDuration="6.604184215s" podCreationTimestamp="2025-11-29 06:58:24 +0000 UTC" firstStartedPulling="2025-11-29 06:58:25.104993885 +0000 UTC m=+1456.149375966" lastFinishedPulling="2025-11-29 06:58:29.144819076 +0000 UTC m=+1460.189201177" observedRunningTime="2025-11-29 06:58:30.59207485 +0000 UTC m=+1461.636456931" watchObservedRunningTime="2025-11-29 06:58:30.604184215 +0000 UTC m=+1461.648566296" Nov 29 06:58:30 crc kubenswrapper[4947]: I1129 06:58:30.652985 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.006437738 podStartE2EDuration="6.652953751s" podCreationTimestamp="2025-11-29 06:58:24 +0000 UTC" firstStartedPulling="2025-11-29 06:58:25.497996045 +0000 UTC m=+1456.542378126" lastFinishedPulling="2025-11-29 06:58:29.144512058 +0000 UTC m=+1460.188894139" observedRunningTime="2025-11-29 06:58:30.63504006 +0000 UTC m=+1461.679422141" watchObservedRunningTime="2025-11-29 06:58:30.652953751 +0000 UTC m=+1461.697335832" Nov 29 06:58:30 crc kubenswrapper[4947]: I1129 06:58:30.664581 4947 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.016704826 podStartE2EDuration="6.664545262s" podCreationTimestamp="2025-11-29 06:58:24 +0000 UTC" firstStartedPulling="2025-11-29 06:58:25.497528614 +0000 UTC m=+1456.541910695" lastFinishedPulling="2025-11-29 06:58:29.14536905 +0000 UTC m=+1460.189751131" observedRunningTime="2025-11-29 06:58:30.658821758 +0000 UTC m=+1461.703203849" watchObservedRunningTime="2025-11-29 06:58:30.664545262 +0000 UTC m=+1461.708927343" Nov 29 06:58:30 crc kubenswrapper[4947]: I1129 06:58:30.913943 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 29 06:58:30 crc kubenswrapper[4947]: I1129 06:58:30.974640 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwlxg\" (UniqueName: \"kubernetes.io/projected/cc514c2a-183d-404e-ba5d-7641695da78c-kube-api-access-qwlxg\") pod \"cc514c2a-183d-404e-ba5d-7641695da78c\" (UID: \"cc514c2a-183d-404e-ba5d-7641695da78c\") " Nov 29 06:58:30 crc kubenswrapper[4947]: I1129 06:58:30.983893 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc514c2a-183d-404e-ba5d-7641695da78c-kube-api-access-qwlxg" (OuterVolumeSpecName: "kube-api-access-qwlxg") pod "cc514c2a-183d-404e-ba5d-7641695da78c" (UID: "cc514c2a-183d-404e-ba5d-7641695da78c"). InnerVolumeSpecName "kube-api-access-qwlxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.077259 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwlxg\" (UniqueName: \"kubernetes.io/projected/cc514c2a-183d-404e-ba5d-7641695da78c-kube-api-access-qwlxg\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.599582 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.601010 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cc514c2a-183d-404e-ba5d-7641695da78c","Type":"ContainerDied","Data":"852273845a4fdfb4e96f5e5d493b67b2f729c09a6e6965b956777d5e731a83d1"} Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.601081 4947 scope.go:117] "RemoveContainer" containerID="1b4e43ccec099e72145483dfd5425f13d8f76c9fab6b04efea0336625cdf037c" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.605566 4947 generic.go:334] "Generic (PLEG): container finished" podID="3b45492d-4070-4898-95e7-b15ffef7e11c" containerID="dc5ae4f2a27f16dec9f3a6d76b66cf9965cfd88347f9939f55c5178c1b18c063" exitCode=0 Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.605615 4947 generic.go:334] "Generic (PLEG): container finished" podID="3b45492d-4070-4898-95e7-b15ffef7e11c" containerID="7e2ce4a8b293b02dd386adfb738bc53fd5c8f38a3c48fb6e1f08ffd7a020b066" exitCode=143 Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.605969 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b45492d-4070-4898-95e7-b15ffef7e11c","Type":"ContainerDied","Data":"dc5ae4f2a27f16dec9f3a6d76b66cf9965cfd88347f9939f55c5178c1b18c063"} Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.606071 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b45492d-4070-4898-95e7-b15ffef7e11c","Type":"ContainerDied","Data":"7e2ce4a8b293b02dd386adfb738bc53fd5c8f38a3c48fb6e1f08ffd7a020b066"} Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.606106 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b45492d-4070-4898-95e7-b15ffef7e11c","Type":"ContainerDied","Data":"40b7aa7112a49a5adb44b60f68d06179b32bbdf80d48cb069c74bcff982f3ad1"} Nov 29 06:58:31 crc kubenswrapper[4947]: 
I1129 06:58:31.606121 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40b7aa7112a49a5adb44b60f68d06179b32bbdf80d48cb069c74bcff982f3ad1" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.665668 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.684361 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.701743 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.713887 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 06:58:31 crc kubenswrapper[4947]: E1129 06:58:31.714383 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b45492d-4070-4898-95e7-b15ffef7e11c" containerName="nova-metadata-log" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.714402 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b45492d-4070-4898-95e7-b15ffef7e11c" containerName="nova-metadata-log" Nov 29 06:58:31 crc kubenswrapper[4947]: E1129 06:58:31.714452 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc514c2a-183d-404e-ba5d-7641695da78c" containerName="kube-state-metrics" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.714461 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc514c2a-183d-404e-ba5d-7641695da78c" containerName="kube-state-metrics" Nov 29 06:58:31 crc kubenswrapper[4947]: E1129 06:58:31.714470 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b45492d-4070-4898-95e7-b15ffef7e11c" containerName="nova-metadata-metadata" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.714478 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b45492d-4070-4898-95e7-b15ffef7e11c" 
containerName="nova-metadata-metadata" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.714683 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc514c2a-183d-404e-ba5d-7641695da78c" containerName="kube-state-metrics" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.714700 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b45492d-4070-4898-95e7-b15ffef7e11c" containerName="nova-metadata-metadata" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.714720 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b45492d-4070-4898-95e7-b15ffef7e11c" containerName="nova-metadata-log" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.715423 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.719597 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.719701 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.735127 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.805284 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b45492d-4070-4898-95e7-b15ffef7e11c-logs\") pod \"3b45492d-4070-4898-95e7-b15ffef7e11c\" (UID: \"3b45492d-4070-4898-95e7-b15ffef7e11c\") " Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.805462 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpqk6\" (UniqueName: \"kubernetes.io/projected/3b45492d-4070-4898-95e7-b15ffef7e11c-kube-api-access-gpqk6\") pod \"3b45492d-4070-4898-95e7-b15ffef7e11c\" (UID: 
\"3b45492d-4070-4898-95e7-b15ffef7e11c\") " Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.805665 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b45492d-4070-4898-95e7-b15ffef7e11c-config-data\") pod \"3b45492d-4070-4898-95e7-b15ffef7e11c\" (UID: \"3b45492d-4070-4898-95e7-b15ffef7e11c\") " Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.805736 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b45492d-4070-4898-95e7-b15ffef7e11c-combined-ca-bundle\") pod \"3b45492d-4070-4898-95e7-b15ffef7e11c\" (UID: \"3b45492d-4070-4898-95e7-b15ffef7e11c\") " Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.810372 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b45492d-4070-4898-95e7-b15ffef7e11c-logs" (OuterVolumeSpecName: "logs") pod "3b45492d-4070-4898-95e7-b15ffef7e11c" (UID: "3b45492d-4070-4898-95e7-b15ffef7e11c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.813916 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b45492d-4070-4898-95e7-b15ffef7e11c-kube-api-access-gpqk6" (OuterVolumeSpecName: "kube-api-access-gpqk6") pod "3b45492d-4070-4898-95e7-b15ffef7e11c" (UID: "3b45492d-4070-4898-95e7-b15ffef7e11c"). InnerVolumeSpecName "kube-api-access-gpqk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.844429 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b45492d-4070-4898-95e7-b15ffef7e11c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b45492d-4070-4898-95e7-b15ffef7e11c" (UID: "3b45492d-4070-4898-95e7-b15ffef7e11c"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.855370 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b45492d-4070-4898-95e7-b15ffef7e11c-config-data" (OuterVolumeSpecName: "config-data") pod "3b45492d-4070-4898-95e7-b15ffef7e11c" (UID: "3b45492d-4070-4898-95e7-b15ffef7e11c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.907727 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b54f1c17-a225-4975-9d6f-100e5c5b92bd-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b54f1c17-a225-4975-9d6f-100e5c5b92bd\") " pod="openstack/kube-state-metrics-0" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.907823 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6x5s\" (UniqueName: \"kubernetes.io/projected/b54f1c17-a225-4975-9d6f-100e5c5b92bd-kube-api-access-k6x5s\") pod \"kube-state-metrics-0\" (UID: \"b54f1c17-a225-4975-9d6f-100e5c5b92bd\") " pod="openstack/kube-state-metrics-0" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.907855 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b54f1c17-a225-4975-9d6f-100e5c5b92bd-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b54f1c17-a225-4975-9d6f-100e5c5b92bd\") " pod="openstack/kube-state-metrics-0" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.907919 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b54f1c17-a225-4975-9d6f-100e5c5b92bd-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b54f1c17-a225-4975-9d6f-100e5c5b92bd\") " pod="openstack/kube-state-metrics-0" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.908002 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpqk6\" (UniqueName: \"kubernetes.io/projected/3b45492d-4070-4898-95e7-b15ffef7e11c-kube-api-access-gpqk6\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.908015 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b45492d-4070-4898-95e7-b15ffef7e11c-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.908023 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b45492d-4070-4898-95e7-b15ffef7e11c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:31 crc kubenswrapper[4947]: I1129 06:58:31.908033 4947 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b45492d-4070-4898-95e7-b15ffef7e11c-logs\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.009347 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b54f1c17-a225-4975-9d6f-100e5c5b92bd-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b54f1c17-a225-4975-9d6f-100e5c5b92bd\") " pod="openstack/kube-state-metrics-0" Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.009431 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6x5s\" (UniqueName: \"kubernetes.io/projected/b54f1c17-a225-4975-9d6f-100e5c5b92bd-kube-api-access-k6x5s\") pod \"kube-state-metrics-0\" (UID: \"b54f1c17-a225-4975-9d6f-100e5c5b92bd\") " 
pod="openstack/kube-state-metrics-0" Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.009459 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b54f1c17-a225-4975-9d6f-100e5c5b92bd-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b54f1c17-a225-4975-9d6f-100e5c5b92bd\") " pod="openstack/kube-state-metrics-0" Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.009522 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b54f1c17-a225-4975-9d6f-100e5c5b92bd-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b54f1c17-a225-4975-9d6f-100e5c5b92bd\") " pod="openstack/kube-state-metrics-0" Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.017024 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b54f1c17-a225-4975-9d6f-100e5c5b92bd-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b54f1c17-a225-4975-9d6f-100e5c5b92bd\") " pod="openstack/kube-state-metrics-0" Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.018452 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.018583 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b54f1c17-a225-4975-9d6f-100e5c5b92bd-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b54f1c17-a225-4975-9d6f-100e5c5b92bd\") " pod="openstack/kube-state-metrics-0" Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.019183 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b5177fef-81ba-4f18-b118-53b5dfa4bc36" 
containerName="ceilometer-central-agent" containerID="cri-o://d482f275afe3dfba197d7727361c5267a57f4c8724e6c9882417d5043c5fe1b6" gracePeriod=30 Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.019801 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b5177fef-81ba-4f18-b118-53b5dfa4bc36" containerName="proxy-httpd" containerID="cri-o://5da9e2351cf0d67425719c7681fe1af903b1ea5416f3e21678a7293ee0b7d9e1" gracePeriod=30 Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.020164 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b5177fef-81ba-4f18-b118-53b5dfa4bc36" containerName="ceilometer-notification-agent" containerID="cri-o://78f0686bdd74c35d9dcab63d5dc07b703f13ae8a922b45ac0d8000120769d691" gracePeriod=30 Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.020325 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b5177fef-81ba-4f18-b118-53b5dfa4bc36" containerName="sg-core" containerID="cri-o://1a42893ec10eb5bf4f93766065a9c46249fe9d4d875e69bbda0121da907d0334" gracePeriod=30 Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.027206 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b54f1c17-a225-4975-9d6f-100e5c5b92bd-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b54f1c17-a225-4975-9d6f-100e5c5b92bd\") " pod="openstack/kube-state-metrics-0" Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.059029 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6x5s\" (UniqueName: \"kubernetes.io/projected/b54f1c17-a225-4975-9d6f-100e5c5b92bd-kube-api-access-k6x5s\") pod \"kube-state-metrics-0\" (UID: \"b54f1c17-a225-4975-9d6f-100e5c5b92bd\") " pod="openstack/kube-state-metrics-0" Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.081819 
4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.593270 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.621664 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b54f1c17-a225-4975-9d6f-100e5c5b92bd","Type":"ContainerStarted","Data":"58f3e95485bd0bc0ff778a944a636458c9e311c8711ea4c2c0522d2e1da10915"} Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.630095 4947 generic.go:334] "Generic (PLEG): container finished" podID="b5177fef-81ba-4f18-b118-53b5dfa4bc36" containerID="5da9e2351cf0d67425719c7681fe1af903b1ea5416f3e21678a7293ee0b7d9e1" exitCode=0 Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.630149 4947 generic.go:334] "Generic (PLEG): container finished" podID="b5177fef-81ba-4f18-b118-53b5dfa4bc36" containerID="1a42893ec10eb5bf4f93766065a9c46249fe9d4d875e69bbda0121da907d0334" exitCode=2 Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.630169 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5177fef-81ba-4f18-b118-53b5dfa4bc36","Type":"ContainerDied","Data":"5da9e2351cf0d67425719c7681fe1af903b1ea5416f3e21678a7293ee0b7d9e1"} Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.630259 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.630269 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5177fef-81ba-4f18-b118-53b5dfa4bc36","Type":"ContainerDied","Data":"1a42893ec10eb5bf4f93766065a9c46249fe9d4d875e69bbda0121da907d0334"} Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.732646 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.764172 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.779339 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.781590 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.788690 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.788734 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.789301 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.933787 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzfrn\" (UniqueName: \"kubernetes.io/projected/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-kube-api-access-hzfrn\") pod \"nova-metadata-0\" (UID: \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\") " pod="openstack/nova-metadata-0" Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.934102 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-logs\") pod \"nova-metadata-0\" (UID: \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\") " pod="openstack/nova-metadata-0" Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.934367 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\") " pod="openstack/nova-metadata-0" Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.934512 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-config-data\") pod \"nova-metadata-0\" (UID: \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\") " pod="openstack/nova-metadata-0" Nov 29 06:58:32 crc kubenswrapper[4947]: I1129 06:58:32.934625 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\") " pod="openstack/nova-metadata-0" Nov 29 06:58:33 crc kubenswrapper[4947]: I1129 06:58:33.036229 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\") " pod="openstack/nova-metadata-0" Nov 29 06:58:33 crc kubenswrapper[4947]: I1129 06:58:33.036472 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzfrn\" (UniqueName: 
\"kubernetes.io/projected/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-kube-api-access-hzfrn\") pod \"nova-metadata-0\" (UID: \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\") " pod="openstack/nova-metadata-0" Nov 29 06:58:33 crc kubenswrapper[4947]: I1129 06:58:33.036574 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-logs\") pod \"nova-metadata-0\" (UID: \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\") " pod="openstack/nova-metadata-0" Nov 29 06:58:33 crc kubenswrapper[4947]: I1129 06:58:33.036658 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\") " pod="openstack/nova-metadata-0" Nov 29 06:58:33 crc kubenswrapper[4947]: I1129 06:58:33.036743 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-config-data\") pod \"nova-metadata-0\" (UID: \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\") " pod="openstack/nova-metadata-0" Nov 29 06:58:33 crc kubenswrapper[4947]: I1129 06:58:33.037389 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-logs\") pod \"nova-metadata-0\" (UID: \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\") " pod="openstack/nova-metadata-0" Nov 29 06:58:33 crc kubenswrapper[4947]: I1129 06:58:33.046204 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\") " pod="openstack/nova-metadata-0" Nov 29 06:58:33 crc 
kubenswrapper[4947]: I1129 06:58:33.046354 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-config-data\") pod \"nova-metadata-0\" (UID: \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\") " pod="openstack/nova-metadata-0" Nov 29 06:58:33 crc kubenswrapper[4947]: I1129 06:58:33.050906 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\") " pod="openstack/nova-metadata-0" Nov 29 06:58:33 crc kubenswrapper[4947]: I1129 06:58:33.058616 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzfrn\" (UniqueName: \"kubernetes.io/projected/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-kube-api-access-hzfrn\") pod \"nova-metadata-0\" (UID: \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\") " pod="openstack/nova-metadata-0" Nov 29 06:58:33 crc kubenswrapper[4947]: I1129 06:58:33.115025 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 06:58:33 crc kubenswrapper[4947]: I1129 06:58:33.203893 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b45492d-4070-4898-95e7-b15ffef7e11c" path="/var/lib/kubelet/pods/3b45492d-4070-4898-95e7-b15ffef7e11c/volumes" Nov 29 06:58:33 crc kubenswrapper[4947]: I1129 06:58:33.204939 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc514c2a-183d-404e-ba5d-7641695da78c" path="/var/lib/kubelet/pods/cc514c2a-183d-404e-ba5d-7641695da78c/volumes" Nov 29 06:58:33 crc kubenswrapper[4947]: I1129 06:58:33.663717 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 06:58:33 crc kubenswrapper[4947]: I1129 06:58:33.666342 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4d894021-7cc3-48ae-bfa0-1f7f51de32dc","Type":"ContainerStarted","Data":"fbdaecec4837af2b845905a1ab009203eea7aa9e40a205658186e90cdbfe4c75"} Nov 29 06:58:33 crc kubenswrapper[4947]: I1129 06:58:33.671781 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b54f1c17-a225-4975-9d6f-100e5c5b92bd","Type":"ContainerStarted","Data":"2f6c386c1fccd4bd1dc3de1f2dde141b0016926e284dfb98212b78e56a09c106"} Nov 29 06:58:33 crc kubenswrapper[4947]: I1129 06:58:33.673083 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 29 06:58:33 crc kubenswrapper[4947]: I1129 06:58:33.679129 4947 generic.go:334] "Generic (PLEG): container finished" podID="b5177fef-81ba-4f18-b118-53b5dfa4bc36" containerID="d482f275afe3dfba197d7727361c5267a57f4c8724e6c9882417d5043c5fe1b6" exitCode=0 Nov 29 06:58:33 crc kubenswrapper[4947]: I1129 06:58:33.679174 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b5177fef-81ba-4f18-b118-53b5dfa4bc36","Type":"ContainerDied","Data":"d482f275afe3dfba197d7727361c5267a57f4c8724e6c9882417d5043c5fe1b6"} Nov 29 06:58:33 crc kubenswrapper[4947]: I1129 06:58:33.701267 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.268728531 podStartE2EDuration="2.701234784s" podCreationTimestamp="2025-11-29 06:58:31 +0000 UTC" firstStartedPulling="2025-11-29 06:58:32.602199474 +0000 UTC m=+1463.646581555" lastFinishedPulling="2025-11-29 06:58:33.034705727 +0000 UTC m=+1464.079087808" observedRunningTime="2025-11-29 06:58:33.696634118 +0000 UTC m=+1464.741016209" watchObservedRunningTime="2025-11-29 06:58:33.701234784 +0000 UTC m=+1464.745616865" Nov 29 06:58:34 crc kubenswrapper[4947]: I1129 06:58:34.473312 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 06:58:34 crc kubenswrapper[4947]: I1129 06:58:34.474066 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 06:58:34 crc kubenswrapper[4947]: I1129 06:58:34.696851 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4d894021-7cc3-48ae-bfa0-1f7f51de32dc","Type":"ContainerStarted","Data":"67195c622d14dc48ea0aafa4bbd28013f2b2630c5016b6b5d088293f27d02b15"} Nov 29 06:58:34 crc kubenswrapper[4947]: I1129 06:58:34.696933 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4d894021-7cc3-48ae-bfa0-1f7f51de32dc","Type":"ContainerStarted","Data":"97e090b1ce6bcb728e6a620ea0ee172975f853922f25f6699e6d29ca1f3c012e"} Nov 29 06:58:34 crc kubenswrapper[4947]: I1129 06:58:34.711035 4947 generic.go:334] "Generic (PLEG): container finished" podID="b5177fef-81ba-4f18-b118-53b5dfa4bc36" containerID="78f0686bdd74c35d9dcab63d5dc07b703f13ae8a922b45ac0d8000120769d691" exitCode=0 Nov 29 06:58:34 crc kubenswrapper[4947]: 
I1129 06:58:34.711344 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5177fef-81ba-4f18-b118-53b5dfa4bc36","Type":"ContainerDied","Data":"78f0686bdd74c35d9dcab63d5dc07b703f13ae8a922b45ac0d8000120769d691"} Nov 29 06:58:34 crc kubenswrapper[4947]: I1129 06:58:34.729158 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:58:34 crc kubenswrapper[4947]: I1129 06:58:34.734333 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.734195093 podStartE2EDuration="2.734195093s" podCreationTimestamp="2025-11-29 06:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:58:34.729491814 +0000 UTC m=+1465.773873895" watchObservedRunningTime="2025-11-29 06:58:34.734195093 +0000 UTC m=+1465.778577174" Nov 29 06:58:34 crc kubenswrapper[4947]: I1129 06:58:34.775334 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 29 06:58:34 crc kubenswrapper[4947]: I1129 06:58:34.775402 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 29 06:58:34 crc kubenswrapper[4947]: I1129 06:58:34.803555 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-566b5b7845-2kftb" Nov 29 06:58:34 crc kubenswrapper[4947]: I1129 06:58:34.849521 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 29 06:58:34 crc kubenswrapper[4947]: I1129 06:58:34.927917 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-cksns"] Nov 29 06:58:34 crc kubenswrapper[4947]: I1129 06:58:34.928590 4947 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" podUID="a07539ac-ab47-4fb2-b397-1dba22e18c65" containerName="dnsmasq-dns" containerID="cri-o://3cdcdfa65126ac936671c2cddaab29c8f8cbad6987cd5665779e497743701b64" gracePeriod=10 Nov 29 06:58:34 crc kubenswrapper[4947]: I1129 06:58:34.966367 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.102410 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5177fef-81ba-4f18-b118-53b5dfa4bc36-run-httpd\") pod \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.102503 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5177fef-81ba-4f18-b118-53b5dfa4bc36-log-httpd\") pod \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.102696 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-combined-ca-bundle\") pod \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.102749 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-scripts\") pod \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.102807 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-sg-core-conf-yaml\") pod \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.102859 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-config-data\") pod \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.103087 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skb5w\" (UniqueName: \"kubernetes.io/projected/b5177fef-81ba-4f18-b118-53b5dfa4bc36-kube-api-access-skb5w\") pod \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\" (UID: \"b5177fef-81ba-4f18-b118-53b5dfa4bc36\") " Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.104611 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5177fef-81ba-4f18-b118-53b5dfa4bc36-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b5177fef-81ba-4f18-b118-53b5dfa4bc36" (UID: "b5177fef-81ba-4f18-b118-53b5dfa4bc36"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.104986 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5177fef-81ba-4f18-b118-53b5dfa4bc36-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b5177fef-81ba-4f18-b118-53b5dfa4bc36" (UID: "b5177fef-81ba-4f18-b118-53b5dfa4bc36"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.116979 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-scripts" (OuterVolumeSpecName: "scripts") pod "b5177fef-81ba-4f18-b118-53b5dfa4bc36" (UID: "b5177fef-81ba-4f18-b118-53b5dfa4bc36"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.131854 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5177fef-81ba-4f18-b118-53b5dfa4bc36-kube-api-access-skb5w" (OuterVolumeSpecName: "kube-api-access-skb5w") pod "b5177fef-81ba-4f18-b118-53b5dfa4bc36" (UID: "b5177fef-81ba-4f18-b118-53b5dfa4bc36"). InnerVolumeSpecName "kube-api-access-skb5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.145511 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b5177fef-81ba-4f18-b118-53b5dfa4bc36" (UID: "b5177fef-81ba-4f18-b118-53b5dfa4bc36"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.210534 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.210564 4947 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.210577 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skb5w\" (UniqueName: \"kubernetes.io/projected/b5177fef-81ba-4f18-b118-53b5dfa4bc36-kube-api-access-skb5w\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.210588 4947 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5177fef-81ba-4f18-b118-53b5dfa4bc36-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.210597 4947 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5177fef-81ba-4f18-b118-53b5dfa4bc36-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.265606 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5177fef-81ba-4f18-b118-53b5dfa4bc36" (UID: "b5177fef-81ba-4f18-b118-53b5dfa4bc36"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.295453 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-config-data" (OuterVolumeSpecName: "config-data") pod "b5177fef-81ba-4f18-b118-53b5dfa4bc36" (UID: "b5177fef-81ba-4f18-b118-53b5dfa4bc36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.313443 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.313520 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5177fef-81ba-4f18-b118-53b5dfa4bc36-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.555563 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="954ecbc2-8aeb-4852-a501-aa51793e1cf6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.167:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.555595 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="954ecbc2-8aeb-4852-a501-aa51793e1cf6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.167:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.560382 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.721176 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-ovsdbserver-nb\") pod \"a07539ac-ab47-4fb2-b397-1dba22e18c65\" (UID: \"a07539ac-ab47-4fb2-b397-1dba22e18c65\") " Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.721261 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2287w\" (UniqueName: \"kubernetes.io/projected/a07539ac-ab47-4fb2-b397-1dba22e18c65-kube-api-access-2287w\") pod \"a07539ac-ab47-4fb2-b397-1dba22e18c65\" (UID: \"a07539ac-ab47-4fb2-b397-1dba22e18c65\") " Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.721373 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-dns-svc\") pod \"a07539ac-ab47-4fb2-b397-1dba22e18c65\" (UID: \"a07539ac-ab47-4fb2-b397-1dba22e18c65\") " Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.721462 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-ovsdbserver-sb\") pod \"a07539ac-ab47-4fb2-b397-1dba22e18c65\" (UID: \"a07539ac-ab47-4fb2-b397-1dba22e18c65\") " Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.721503 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-config\") pod \"a07539ac-ab47-4fb2-b397-1dba22e18c65\" (UID: \"a07539ac-ab47-4fb2-b397-1dba22e18c65\") " Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.740723 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b5177fef-81ba-4f18-b118-53b5dfa4bc36","Type":"ContainerDied","Data":"8b237ebc6586b2b2d1f04f2a5435eeb76972dece7090a3a829f9efd4f3a6f592"} Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.740825 4947 scope.go:117] "RemoveContainer" containerID="5da9e2351cf0d67425719c7681fe1af903b1ea5416f3e21678a7293ee0b7d9e1" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.740838 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.740725 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a07539ac-ab47-4fb2-b397-1dba22e18c65-kube-api-access-2287w" (OuterVolumeSpecName: "kube-api-access-2287w") pod "a07539ac-ab47-4fb2-b397-1dba22e18c65" (UID: "a07539ac-ab47-4fb2-b397-1dba22e18c65"). InnerVolumeSpecName "kube-api-access-2287w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.746381 4947 generic.go:334] "Generic (PLEG): container finished" podID="bc39fbb3-cd61-46df-9549-9a75ff63206e" containerID="88301c4a6525dd3464076b30396bec71b2f89f07078bcb17eb60b208cd865eba" exitCode=0 Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.746460 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7w6zr" event={"ID":"bc39fbb3-cd61-46df-9549-9a75ff63206e","Type":"ContainerDied","Data":"88301c4a6525dd3464076b30396bec71b2f89f07078bcb17eb60b208cd865eba"} Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.749057 4947 generic.go:334] "Generic (PLEG): container finished" podID="a07539ac-ab47-4fb2-b397-1dba22e18c65" containerID="3cdcdfa65126ac936671c2cddaab29c8f8cbad6987cd5665779e497743701b64" exitCode=0 Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.750190 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.750201 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" event={"ID":"a07539ac-ab47-4fb2-b397-1dba22e18c65","Type":"ContainerDied","Data":"3cdcdfa65126ac936671c2cddaab29c8f8cbad6987cd5665779e497743701b64"} Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.751763 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-cksns" event={"ID":"a07539ac-ab47-4fb2-b397-1dba22e18c65","Type":"ContainerDied","Data":"04c2aa1f187dc05e8704b8b7fe6a2f872b3aef7e8642eb0941a300153d08eb21"} Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.792882 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.806968 4947 scope.go:117] "RemoveContainer" containerID="1a42893ec10eb5bf4f93766065a9c46249fe9d4d875e69bbda0121da907d0334" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.817915 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.820458 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a07539ac-ab47-4fb2-b397-1dba22e18c65" (UID: "a07539ac-ab47-4fb2-b397-1dba22e18c65"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.835287 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-config" (OuterVolumeSpecName: "config") pod "a07539ac-ab47-4fb2-b397-1dba22e18c65" (UID: "a07539ac-ab47-4fb2-b397-1dba22e18c65"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.844473 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.844527 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2287w\" (UniqueName: \"kubernetes.io/projected/a07539ac-ab47-4fb2-b397-1dba22e18c65-kube-api-access-2287w\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.844549 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-config\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.863767 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a07539ac-ab47-4fb2-b397-1dba22e18c65" (UID: "a07539ac-ab47-4fb2-b397-1dba22e18c65"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.873089 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a07539ac-ab47-4fb2-b397-1dba22e18c65" (UID: "a07539ac-ab47-4fb2-b397-1dba22e18c65"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.877618 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.888544 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:58:35 crc kubenswrapper[4947]: E1129 06:58:35.889388 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5177fef-81ba-4f18-b118-53b5dfa4bc36" containerName="ceilometer-central-agent" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.889415 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5177fef-81ba-4f18-b118-53b5dfa4bc36" containerName="ceilometer-central-agent" Nov 29 06:58:35 crc kubenswrapper[4947]: E1129 06:58:35.889438 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a07539ac-ab47-4fb2-b397-1dba22e18c65" containerName="dnsmasq-dns" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.889446 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="a07539ac-ab47-4fb2-b397-1dba22e18c65" containerName="dnsmasq-dns" Nov 29 06:58:35 crc kubenswrapper[4947]: E1129 06:58:35.889456 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5177fef-81ba-4f18-b118-53b5dfa4bc36" containerName="sg-core" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.889465 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5177fef-81ba-4f18-b118-53b5dfa4bc36" containerName="sg-core" Nov 29 06:58:35 crc kubenswrapper[4947]: E1129 06:58:35.889474 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5177fef-81ba-4f18-b118-53b5dfa4bc36" containerName="ceilometer-notification-agent" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.889481 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5177fef-81ba-4f18-b118-53b5dfa4bc36" containerName="ceilometer-notification-agent" Nov 29 06:58:35 crc kubenswrapper[4947]: 
E1129 06:58:35.889520 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a07539ac-ab47-4fb2-b397-1dba22e18c65" containerName="init" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.889529 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="a07539ac-ab47-4fb2-b397-1dba22e18c65" containerName="init" Nov 29 06:58:35 crc kubenswrapper[4947]: E1129 06:58:35.889550 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5177fef-81ba-4f18-b118-53b5dfa4bc36" containerName="proxy-httpd" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.889557 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5177fef-81ba-4f18-b118-53b5dfa4bc36" containerName="proxy-httpd" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.889820 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5177fef-81ba-4f18-b118-53b5dfa4bc36" containerName="ceilometer-notification-agent" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.889844 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="a07539ac-ab47-4fb2-b397-1dba22e18c65" containerName="dnsmasq-dns" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.889867 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5177fef-81ba-4f18-b118-53b5dfa4bc36" containerName="sg-core" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.889885 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5177fef-81ba-4f18-b118-53b5dfa4bc36" containerName="ceilometer-central-agent" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.889896 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5177fef-81ba-4f18-b118-53b5dfa4bc36" containerName="proxy-httpd" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.891957 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.900179 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.900891 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.902654 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.906651 4947 scope.go:117] "RemoveContainer" containerID="78f0686bdd74c35d9dcab63d5dc07b703f13ae8a922b45ac0d8000120769d691" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.911676 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.942554 4947 scope.go:117] "RemoveContainer" containerID="d482f275afe3dfba197d7727361c5267a57f4c8724e6c9882417d5043c5fe1b6" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.947421 4947 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.947477 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a07539ac-ab47-4fb2-b397-1dba22e18c65-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:35 crc kubenswrapper[4947]: I1129 06:58:35.994421 4947 scope.go:117] "RemoveContainer" containerID="3cdcdfa65126ac936671c2cddaab29c8f8cbad6987cd5665779e497743701b64" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.019440 4947 scope.go:117] "RemoveContainer" containerID="d109548ef1a1996edd368d9cefa4f837b55738749cfc6a7f8bceeb3c93414e8f" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 
06:58:36.048185 4947 scope.go:117] "RemoveContainer" containerID="3cdcdfa65126ac936671c2cddaab29c8f8cbad6987cd5665779e497743701b64" Nov 29 06:58:36 crc kubenswrapper[4947]: E1129 06:58:36.048838 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cdcdfa65126ac936671c2cddaab29c8f8cbad6987cd5665779e497743701b64\": container with ID starting with 3cdcdfa65126ac936671c2cddaab29c8f8cbad6987cd5665779e497743701b64 not found: ID does not exist" containerID="3cdcdfa65126ac936671c2cddaab29c8f8cbad6987cd5665779e497743701b64" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.048873 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cdcdfa65126ac936671c2cddaab29c8f8cbad6987cd5665779e497743701b64"} err="failed to get container status \"3cdcdfa65126ac936671c2cddaab29c8f8cbad6987cd5665779e497743701b64\": rpc error: code = NotFound desc = could not find container \"3cdcdfa65126ac936671c2cddaab29c8f8cbad6987cd5665779e497743701b64\": container with ID starting with 3cdcdfa65126ac936671c2cddaab29c8f8cbad6987cd5665779e497743701b64 not found: ID does not exist" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.048898 4947 scope.go:117] "RemoveContainer" containerID="d109548ef1a1996edd368d9cefa4f837b55738749cfc6a7f8bceeb3c93414e8f" Nov 29 06:58:36 crc kubenswrapper[4947]: E1129 06:58:36.049483 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d109548ef1a1996edd368d9cefa4f837b55738749cfc6a7f8bceeb3c93414e8f\": container with ID starting with d109548ef1a1996edd368d9cefa4f837b55738749cfc6a7f8bceeb3c93414e8f not found: ID does not exist" containerID="d109548ef1a1996edd368d9cefa4f837b55738749cfc6a7f8bceeb3c93414e8f" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.049504 4947 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d109548ef1a1996edd368d9cefa4f837b55738749cfc6a7f8bceeb3c93414e8f"} err="failed to get container status \"d109548ef1a1996edd368d9cefa4f837b55738749cfc6a7f8bceeb3c93414e8f\": rpc error: code = NotFound desc = could not find container \"d109548ef1a1996edd368d9cefa4f837b55738749cfc6a7f8bceeb3c93414e8f\": container with ID starting with d109548ef1a1996edd368d9cefa4f837b55738749cfc6a7f8bceeb3c93414e8f not found: ID does not exist" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.060197 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.060399 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.060449 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.060524 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-config-data\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc 
kubenswrapper[4947]: I1129 06:58:36.060711 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-scripts\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.060737 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/222c733d-f870-4232-b9d9-d2a9c738927f-run-httpd\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.060766 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/222c733d-f870-4232-b9d9-d2a9c738927f-log-httpd\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.060888 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z4mh\" (UniqueName: \"kubernetes.io/projected/222c733d-f870-4232-b9d9-d2a9c738927f-kube-api-access-7z4mh\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.098608 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-cksns"] Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.108114 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-cksns"] Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.163483 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.163564 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.163594 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.163634 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-config-data\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.163671 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-scripts\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.163691 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/222c733d-f870-4232-b9d9-d2a9c738927f-run-httpd\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 
06:58:36.163711 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/222c733d-f870-4232-b9d9-d2a9c738927f-log-httpd\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.163740 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z4mh\" (UniqueName: \"kubernetes.io/projected/222c733d-f870-4232-b9d9-d2a9c738927f-kube-api-access-7z4mh\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.165017 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/222c733d-f870-4232-b9d9-d2a9c738927f-log-httpd\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.165203 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/222c733d-f870-4232-b9d9-d2a9c738927f-run-httpd\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.173787 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.174946 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.175149 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-scripts\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.182524 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.182665 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-config-data\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.199201 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z4mh\" (UniqueName: \"kubernetes.io/projected/222c733d-f870-4232-b9d9-d2a9c738927f-kube-api-access-7z4mh\") pod \"ceilometer-0\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.218174 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.727690 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:58:36 crc kubenswrapper[4947]: I1129 06:58:36.765407 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"222c733d-f870-4232-b9d9-d2a9c738927f","Type":"ContainerStarted","Data":"e549908914759ed275ca7c8e949e0659350022de8b3779c97844c28f320317d6"} Nov 29 06:58:37 crc kubenswrapper[4947]: I1129 06:58:37.197100 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a07539ac-ab47-4fb2-b397-1dba22e18c65" path="/var/lib/kubelet/pods/a07539ac-ab47-4fb2-b397-1dba22e18c65/volumes" Nov 29 06:58:37 crc kubenswrapper[4947]: I1129 06:58:37.198936 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5177fef-81ba-4f18-b118-53b5dfa4bc36" path="/var/lib/kubelet/pods/b5177fef-81ba-4f18-b118-53b5dfa4bc36/volumes" Nov 29 06:58:37 crc kubenswrapper[4947]: I1129 06:58:37.213791 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7w6zr" Nov 29 06:58:37 crc kubenswrapper[4947]: I1129 06:58:37.401046 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc39fbb3-cd61-46df-9549-9a75ff63206e-config-data\") pod \"bc39fbb3-cd61-46df-9549-9a75ff63206e\" (UID: \"bc39fbb3-cd61-46df-9549-9a75ff63206e\") " Nov 29 06:58:37 crc kubenswrapper[4947]: I1129 06:58:37.401252 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc39fbb3-cd61-46df-9549-9a75ff63206e-scripts\") pod \"bc39fbb3-cd61-46df-9549-9a75ff63206e\" (UID: \"bc39fbb3-cd61-46df-9549-9a75ff63206e\") " Nov 29 06:58:37 crc kubenswrapper[4947]: I1129 06:58:37.401380 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h66h\" (UniqueName: \"kubernetes.io/projected/bc39fbb3-cd61-46df-9549-9a75ff63206e-kube-api-access-5h66h\") pod \"bc39fbb3-cd61-46df-9549-9a75ff63206e\" (UID: \"bc39fbb3-cd61-46df-9549-9a75ff63206e\") " Nov 29 06:58:37 crc kubenswrapper[4947]: I1129 06:58:37.401476 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc39fbb3-cd61-46df-9549-9a75ff63206e-combined-ca-bundle\") pod \"bc39fbb3-cd61-46df-9549-9a75ff63206e\" (UID: \"bc39fbb3-cd61-46df-9549-9a75ff63206e\") " Nov 29 06:58:37 crc kubenswrapper[4947]: I1129 06:58:37.429781 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc39fbb3-cd61-46df-9549-9a75ff63206e-kube-api-access-5h66h" (OuterVolumeSpecName: "kube-api-access-5h66h") pod "bc39fbb3-cd61-46df-9549-9a75ff63206e" (UID: "bc39fbb3-cd61-46df-9549-9a75ff63206e"). InnerVolumeSpecName "kube-api-access-5h66h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:58:37 crc kubenswrapper[4947]: I1129 06:58:37.456884 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc39fbb3-cd61-46df-9549-9a75ff63206e-scripts" (OuterVolumeSpecName: "scripts") pod "bc39fbb3-cd61-46df-9549-9a75ff63206e" (UID: "bc39fbb3-cd61-46df-9549-9a75ff63206e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:58:37 crc kubenswrapper[4947]: I1129 06:58:37.497439 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc39fbb3-cd61-46df-9549-9a75ff63206e-config-data" (OuterVolumeSpecName: "config-data") pod "bc39fbb3-cd61-46df-9549-9a75ff63206e" (UID: "bc39fbb3-cd61-46df-9549-9a75ff63206e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:58:37 crc kubenswrapper[4947]: I1129 06:58:37.504836 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h66h\" (UniqueName: \"kubernetes.io/projected/bc39fbb3-cd61-46df-9549-9a75ff63206e-kube-api-access-5h66h\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:37 crc kubenswrapper[4947]: I1129 06:58:37.504898 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc39fbb3-cd61-46df-9549-9a75ff63206e-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:37 crc kubenswrapper[4947]: I1129 06:58:37.504911 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc39fbb3-cd61-46df-9549-9a75ff63206e-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:37 crc kubenswrapper[4947]: I1129 06:58:37.515005 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc39fbb3-cd61-46df-9549-9a75ff63206e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc39fbb3-cd61-46df-9549-9a75ff63206e" 
(UID: "bc39fbb3-cd61-46df-9549-9a75ff63206e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:58:37 crc kubenswrapper[4947]: I1129 06:58:37.607562 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc39fbb3-cd61-46df-9549-9a75ff63206e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:37 crc kubenswrapper[4947]: I1129 06:58:37.786262 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7w6zr" Nov 29 06:58:37 crc kubenswrapper[4947]: I1129 06:58:37.786261 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7w6zr" event={"ID":"bc39fbb3-cd61-46df-9549-9a75ff63206e","Type":"ContainerDied","Data":"861986d765691be9c314e7e37e21351f8e8e7a05b95f33ce7b5769f35dafb224"} Nov 29 06:58:37 crc kubenswrapper[4947]: I1129 06:58:37.790098 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="861986d765691be9c314e7e37e21351f8e8e7a05b95f33ce7b5769f35dafb224" Nov 29 06:58:37 crc kubenswrapper[4947]: I1129 06:58:37.994507 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 06:58:37 crc kubenswrapper[4947]: I1129 06:58:37.995356 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7185c3eb-f19f-4e6e-9625-77304f01c880" containerName="nova-scheduler-scheduler" containerID="cri-o://8d0f40ca186354fe09edd525ffdf4a4f4af91c911c09d4dc71a85681e99a1c12" gracePeriod=30 Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.007992 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.008609 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="954ecbc2-8aeb-4852-a501-aa51793e1cf6" containerName="nova-api-api" containerID="cri-o://0fab570bcdbe21a6be7498564ecc73fe8965348180fe9d5b18f7958d7b4700f6" gracePeriod=30 Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.008903 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="954ecbc2-8aeb-4852-a501-aa51793e1cf6" containerName="nova-api-log" containerID="cri-o://302f4aae5ead7b6804f23f02a4bf311ef4a162fff553d9b1f519be79ef1469e7" gracePeriod=30 Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.024415 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.024816 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4d894021-7cc3-48ae-bfa0-1f7f51de32dc" containerName="nova-metadata-log" containerID="cri-o://97e090b1ce6bcb728e6a620ea0ee172975f853922f25f6699e6d29ca1f3c012e" gracePeriod=30 Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.025057 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4d894021-7cc3-48ae-bfa0-1f7f51de32dc" containerName="nova-metadata-metadata" containerID="cri-o://67195c622d14dc48ea0aafa4bbd28013f2b2630c5016b6b5d088293f27d02b15" gracePeriod=30 Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.116101 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.118359 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.738720 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.825595 4947 generic.go:334] "Generic (PLEG): container finished" podID="954ecbc2-8aeb-4852-a501-aa51793e1cf6" containerID="302f4aae5ead7b6804f23f02a4bf311ef4a162fff553d9b1f519be79ef1469e7" exitCode=143 Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.825791 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"954ecbc2-8aeb-4852-a501-aa51793e1cf6","Type":"ContainerDied","Data":"302f4aae5ead7b6804f23f02a4bf311ef4a162fff553d9b1f519be79ef1469e7"} Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.832995 4947 generic.go:334] "Generic (PLEG): container finished" podID="4d894021-7cc3-48ae-bfa0-1f7f51de32dc" containerID="67195c622d14dc48ea0aafa4bbd28013f2b2630c5016b6b5d088293f27d02b15" exitCode=0 Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.833035 4947 generic.go:334] "Generic (PLEG): container finished" podID="4d894021-7cc3-48ae-bfa0-1f7f51de32dc" containerID="97e090b1ce6bcb728e6a620ea0ee172975f853922f25f6699e6d29ca1f3c012e" exitCode=143 Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.833081 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.833121 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4d894021-7cc3-48ae-bfa0-1f7f51de32dc","Type":"ContainerDied","Data":"67195c622d14dc48ea0aafa4bbd28013f2b2630c5016b6b5d088293f27d02b15"} Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.833162 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4d894021-7cc3-48ae-bfa0-1f7f51de32dc","Type":"ContainerDied","Data":"97e090b1ce6bcb728e6a620ea0ee172975f853922f25f6699e6d29ca1f3c012e"} Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.833175 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4d894021-7cc3-48ae-bfa0-1f7f51de32dc","Type":"ContainerDied","Data":"fbdaecec4837af2b845905a1ab009203eea7aa9e40a205658186e90cdbfe4c75"} Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.833211 4947 scope.go:117] "RemoveContainer" containerID="67195c622d14dc48ea0aafa4bbd28013f2b2630c5016b6b5d088293f27d02b15" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.842280 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"222c733d-f870-4232-b9d9-d2a9c738927f","Type":"ContainerStarted","Data":"ab257a326615ba400b6aabdcefccf2c813646047394625533dff7162ce23e0c1"} Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.842360 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"222c733d-f870-4232-b9d9-d2a9c738927f","Type":"ContainerStarted","Data":"46e0be66f82a12264ad3c18bad009d8802e669b6c4f7bf497631fae941b3657c"} Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.861720 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-config-data\") pod 
\"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\" (UID: \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\") " Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.862176 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-nova-metadata-tls-certs\") pod \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\" (UID: \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\") " Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.862713 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzfrn\" (UniqueName: \"kubernetes.io/projected/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-kube-api-access-hzfrn\") pod \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\" (UID: \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\") " Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.863394 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-logs\") pod \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\" (UID: \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\") " Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.863516 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-combined-ca-bundle\") pod \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\" (UID: \"4d894021-7cc3-48ae-bfa0-1f7f51de32dc\") " Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.863989 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-logs" (OuterVolumeSpecName: "logs") pod "4d894021-7cc3-48ae-bfa0-1f7f51de32dc" (UID: "4d894021-7cc3-48ae-bfa0-1f7f51de32dc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.864548 4947 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-logs\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.866005 4947 scope.go:117] "RemoveContainer" containerID="97e090b1ce6bcb728e6a620ea0ee172975f853922f25f6699e6d29ca1f3c012e" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.868698 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-kube-api-access-hzfrn" (OuterVolumeSpecName: "kube-api-access-hzfrn") pod "4d894021-7cc3-48ae-bfa0-1f7f51de32dc" (UID: "4d894021-7cc3-48ae-bfa0-1f7f51de32dc"). InnerVolumeSpecName "kube-api-access-hzfrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.898033 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-config-data" (OuterVolumeSpecName: "config-data") pod "4d894021-7cc3-48ae-bfa0-1f7f51de32dc" (UID: "4d894021-7cc3-48ae-bfa0-1f7f51de32dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.900726 4947 scope.go:117] "RemoveContainer" containerID="67195c622d14dc48ea0aafa4bbd28013f2b2630c5016b6b5d088293f27d02b15" Nov 29 06:58:38 crc kubenswrapper[4947]: E1129 06:58:38.901329 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67195c622d14dc48ea0aafa4bbd28013f2b2630c5016b6b5d088293f27d02b15\": container with ID starting with 67195c622d14dc48ea0aafa4bbd28013f2b2630c5016b6b5d088293f27d02b15 not found: ID does not exist" containerID="67195c622d14dc48ea0aafa4bbd28013f2b2630c5016b6b5d088293f27d02b15" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.901403 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67195c622d14dc48ea0aafa4bbd28013f2b2630c5016b6b5d088293f27d02b15"} err="failed to get container status \"67195c622d14dc48ea0aafa4bbd28013f2b2630c5016b6b5d088293f27d02b15\": rpc error: code = NotFound desc = could not find container \"67195c622d14dc48ea0aafa4bbd28013f2b2630c5016b6b5d088293f27d02b15\": container with ID starting with 67195c622d14dc48ea0aafa4bbd28013f2b2630c5016b6b5d088293f27d02b15 not found: ID does not exist" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.901437 4947 scope.go:117] "RemoveContainer" containerID="97e090b1ce6bcb728e6a620ea0ee172975f853922f25f6699e6d29ca1f3c012e" Nov 29 06:58:38 crc kubenswrapper[4947]: E1129 06:58:38.901791 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97e090b1ce6bcb728e6a620ea0ee172975f853922f25f6699e6d29ca1f3c012e\": container with ID starting with 97e090b1ce6bcb728e6a620ea0ee172975f853922f25f6699e6d29ca1f3c012e not found: ID does not exist" containerID="97e090b1ce6bcb728e6a620ea0ee172975f853922f25f6699e6d29ca1f3c012e" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.901843 
4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97e090b1ce6bcb728e6a620ea0ee172975f853922f25f6699e6d29ca1f3c012e"} err="failed to get container status \"97e090b1ce6bcb728e6a620ea0ee172975f853922f25f6699e6d29ca1f3c012e\": rpc error: code = NotFound desc = could not find container \"97e090b1ce6bcb728e6a620ea0ee172975f853922f25f6699e6d29ca1f3c012e\": container with ID starting with 97e090b1ce6bcb728e6a620ea0ee172975f853922f25f6699e6d29ca1f3c012e not found: ID does not exist" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.901868 4947 scope.go:117] "RemoveContainer" containerID="67195c622d14dc48ea0aafa4bbd28013f2b2630c5016b6b5d088293f27d02b15" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.902357 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67195c622d14dc48ea0aafa4bbd28013f2b2630c5016b6b5d088293f27d02b15"} err="failed to get container status \"67195c622d14dc48ea0aafa4bbd28013f2b2630c5016b6b5d088293f27d02b15\": rpc error: code = NotFound desc = could not find container \"67195c622d14dc48ea0aafa4bbd28013f2b2630c5016b6b5d088293f27d02b15\": container with ID starting with 67195c622d14dc48ea0aafa4bbd28013f2b2630c5016b6b5d088293f27d02b15 not found: ID does not exist" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.902383 4947 scope.go:117] "RemoveContainer" containerID="97e090b1ce6bcb728e6a620ea0ee172975f853922f25f6699e6d29ca1f3c012e" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.903162 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97e090b1ce6bcb728e6a620ea0ee172975f853922f25f6699e6d29ca1f3c012e"} err="failed to get container status \"97e090b1ce6bcb728e6a620ea0ee172975f853922f25f6699e6d29ca1f3c012e\": rpc error: code = NotFound desc = could not find container \"97e090b1ce6bcb728e6a620ea0ee172975f853922f25f6699e6d29ca1f3c012e\": container with ID starting with 
97e090b1ce6bcb728e6a620ea0ee172975f853922f25f6699e6d29ca1f3c012e not found: ID does not exist" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.910935 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d894021-7cc3-48ae-bfa0-1f7f51de32dc" (UID: "4d894021-7cc3-48ae-bfa0-1f7f51de32dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.928115 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4d894021-7cc3-48ae-bfa0-1f7f51de32dc" (UID: "4d894021-7cc3-48ae-bfa0-1f7f51de32dc"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.974632 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.974697 4947 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.974720 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzfrn\" (UniqueName: \"kubernetes.io/projected/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-kube-api-access-hzfrn\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:38 crc kubenswrapper[4947]: I1129 06:58:38.974744 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4d894021-7cc3-48ae-bfa0-1f7f51de32dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.173095 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.193167 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.205336 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 29 06:58:39 crc kubenswrapper[4947]: E1129 06:58:39.205824 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc39fbb3-cd61-46df-9549-9a75ff63206e" containerName="nova-manage" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.205854 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc39fbb3-cd61-46df-9549-9a75ff63206e" containerName="nova-manage" Nov 29 06:58:39 crc kubenswrapper[4947]: E1129 06:58:39.205876 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d894021-7cc3-48ae-bfa0-1f7f51de32dc" containerName="nova-metadata-log" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.205887 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d894021-7cc3-48ae-bfa0-1f7f51de32dc" containerName="nova-metadata-log" Nov 29 06:58:39 crc kubenswrapper[4947]: E1129 06:58:39.205897 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d894021-7cc3-48ae-bfa0-1f7f51de32dc" containerName="nova-metadata-metadata" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.205905 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d894021-7cc3-48ae-bfa0-1f7f51de32dc" containerName="nova-metadata-metadata" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.206094 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc39fbb3-cd61-46df-9549-9a75ff63206e" containerName="nova-manage" Nov 29 06:58:39 crc 
kubenswrapper[4947]: I1129 06:58:39.206118 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d894021-7cc3-48ae-bfa0-1f7f51de32dc" containerName="nova-metadata-metadata" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.206127 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d894021-7cc3-48ae-bfa0-1f7f51de32dc" containerName="nova-metadata-log" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.207203 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.210441 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.210647 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.227118 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.281582 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7845fd7b-f71a-4974-ae50-17ce9451207f-logs\") pod \"nova-metadata-0\" (UID: \"7845fd7b-f71a-4974-ae50-17ce9451207f\") " pod="openstack/nova-metadata-0" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.281679 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7845fd7b-f71a-4974-ae50-17ce9451207f-config-data\") pod \"nova-metadata-0\" (UID: \"7845fd7b-f71a-4974-ae50-17ce9451207f\") " pod="openstack/nova-metadata-0" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.281718 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tcxp\" (UniqueName: 
\"kubernetes.io/projected/7845fd7b-f71a-4974-ae50-17ce9451207f-kube-api-access-8tcxp\") pod \"nova-metadata-0\" (UID: \"7845fd7b-f71a-4974-ae50-17ce9451207f\") " pod="openstack/nova-metadata-0" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.281756 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7845fd7b-f71a-4974-ae50-17ce9451207f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7845fd7b-f71a-4974-ae50-17ce9451207f\") " pod="openstack/nova-metadata-0" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.281871 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7845fd7b-f71a-4974-ae50-17ce9451207f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7845fd7b-f71a-4974-ae50-17ce9451207f\") " pod="openstack/nova-metadata-0" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.383614 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7845fd7b-f71a-4974-ae50-17ce9451207f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7845fd7b-f71a-4974-ae50-17ce9451207f\") " pod="openstack/nova-metadata-0" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.383714 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7845fd7b-f71a-4974-ae50-17ce9451207f-logs\") pod \"nova-metadata-0\" (UID: \"7845fd7b-f71a-4974-ae50-17ce9451207f\") " pod="openstack/nova-metadata-0" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.383763 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7845fd7b-f71a-4974-ae50-17ce9451207f-config-data\") pod \"nova-metadata-0\" (UID: 
\"7845fd7b-f71a-4974-ae50-17ce9451207f\") " pod="openstack/nova-metadata-0" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.383808 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tcxp\" (UniqueName: \"kubernetes.io/projected/7845fd7b-f71a-4974-ae50-17ce9451207f-kube-api-access-8tcxp\") pod \"nova-metadata-0\" (UID: \"7845fd7b-f71a-4974-ae50-17ce9451207f\") " pod="openstack/nova-metadata-0" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.383846 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7845fd7b-f71a-4974-ae50-17ce9451207f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7845fd7b-f71a-4974-ae50-17ce9451207f\") " pod="openstack/nova-metadata-0" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.389456 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7845fd7b-f71a-4974-ae50-17ce9451207f-logs\") pod \"nova-metadata-0\" (UID: \"7845fd7b-f71a-4974-ae50-17ce9451207f\") " pod="openstack/nova-metadata-0" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.389898 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7845fd7b-f71a-4974-ae50-17ce9451207f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7845fd7b-f71a-4974-ae50-17ce9451207f\") " pod="openstack/nova-metadata-0" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.395353 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7845fd7b-f71a-4974-ae50-17ce9451207f-config-data\") pod \"nova-metadata-0\" (UID: \"7845fd7b-f71a-4974-ae50-17ce9451207f\") " pod="openstack/nova-metadata-0" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.400143 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7845fd7b-f71a-4974-ae50-17ce9451207f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7845fd7b-f71a-4974-ae50-17ce9451207f\") " pod="openstack/nova-metadata-0" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.414124 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tcxp\" (UniqueName: \"kubernetes.io/projected/7845fd7b-f71a-4974-ae50-17ce9451207f-kube-api-access-8tcxp\") pod \"nova-metadata-0\" (UID: \"7845fd7b-f71a-4974-ae50-17ce9451207f\") " pod="openstack/nova-metadata-0" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.527525 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 06:58:39 crc kubenswrapper[4947]: E1129 06:58:39.778001 4947 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d0f40ca186354fe09edd525ffdf4a4f4af91c911c09d4dc71a85681e99a1c12" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 29 06:58:39 crc kubenswrapper[4947]: E1129 06:58:39.787795 4947 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d0f40ca186354fe09edd525ffdf4a4f4af91c911c09d4dc71a85681e99a1c12" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 29 06:58:39 crc kubenswrapper[4947]: E1129 06:58:39.795689 4947 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d0f40ca186354fe09edd525ffdf4a4f4af91c911c09d4dc71a85681e99a1c12" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 29 06:58:39 crc kubenswrapper[4947]: E1129 
06:58:39.795804 4947 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="7185c3eb-f19f-4e6e-9625-77304f01c880" containerName="nova-scheduler-scheduler" Nov 29 06:58:39 crc kubenswrapper[4947]: I1129 06:58:39.867321 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"222c733d-f870-4232-b9d9-d2a9c738927f","Type":"ContainerStarted","Data":"9ff2ae046700848d2b08ddcd201293d0059cb6918dec496bac3564bee1933bd9"} Nov 29 06:58:40 crc kubenswrapper[4947]: W1129 06:58:40.084611 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7845fd7b_f71a_4974_ae50_17ce9451207f.slice/crio-7304c05d88c1961ea09b914d4a6f84e2ab7e95c63fc008f6685333f6bed0dd5a WatchSource:0}: Error finding container 7304c05d88c1961ea09b914d4a6f84e2ab7e95c63fc008f6685333f6bed0dd5a: Status 404 returned error can't find the container with id 7304c05d88c1961ea09b914d4a6f84e2ab7e95c63fc008f6685333f6bed0dd5a Nov 29 06:58:40 crc kubenswrapper[4947]: I1129 06:58:40.085657 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 06:58:40 crc kubenswrapper[4947]: I1129 06:58:40.880571 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7845fd7b-f71a-4974-ae50-17ce9451207f","Type":"ContainerStarted","Data":"48c0f8eea5f305cda56e8cb0a04ff0c65cd0e03f7d23246573a49adfd3e8a6fc"} Nov 29 06:58:40 crc kubenswrapper[4947]: I1129 06:58:40.881012 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7845fd7b-f71a-4974-ae50-17ce9451207f","Type":"ContainerStarted","Data":"0156d25edea207b6a6e347008a9998faa722816370e6a383eb5b8b311e8b51b6"} Nov 29 06:58:40 crc kubenswrapper[4947]: I1129 06:58:40.881026 4947 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7845fd7b-f71a-4974-ae50-17ce9451207f","Type":"ContainerStarted","Data":"7304c05d88c1961ea09b914d4a6f84e2ab7e95c63fc008f6685333f6bed0dd5a"} Nov 29 06:58:40 crc kubenswrapper[4947]: I1129 06:58:40.924371 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.924341401 podStartE2EDuration="1.924341401s" podCreationTimestamp="2025-11-29 06:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:58:40.907944959 +0000 UTC m=+1471.952327040" watchObservedRunningTime="2025-11-29 06:58:40.924341401 +0000 UTC m=+1471.968723492" Nov 29 06:58:41 crc kubenswrapper[4947]: I1129 06:58:41.193671 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d894021-7cc3-48ae-bfa0-1f7f51de32dc" path="/var/lib/kubelet/pods/4d894021-7cc3-48ae-bfa0-1f7f51de32dc/volumes" Nov 29 06:58:41 crc kubenswrapper[4947]: I1129 06:58:41.898632 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"222c733d-f870-4232-b9d9-d2a9c738927f","Type":"ContainerStarted","Data":"5757b49b55088a093bd1c1509a030821dc46b673412467c680cfd40b2f2ac60c"} Nov 29 06:58:41 crc kubenswrapper[4947]: I1129 06:58:41.898740 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 06:58:41 crc kubenswrapper[4947]: I1129 06:58:41.928984 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.702648069 podStartE2EDuration="6.928949257s" podCreationTimestamp="2025-11-29 06:58:35 +0000 UTC" firstStartedPulling="2025-11-29 06:58:36.73731914 +0000 UTC m=+1467.781701221" lastFinishedPulling="2025-11-29 06:58:40.963620328 +0000 UTC m=+1472.008002409" observedRunningTime="2025-11-29 06:58:41.92669909 
+0000 UTC m=+1472.971081191" watchObservedRunningTime="2025-11-29 06:58:41.928949257 +0000 UTC m=+1472.973331338" Nov 29 06:58:42 crc kubenswrapper[4947]: I1129 06:58:42.093619 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 29 06:58:42 crc kubenswrapper[4947]: I1129 06:58:42.321804 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zggzl"] Nov 29 06:58:42 crc kubenswrapper[4947]: I1129 06:58:42.324005 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zggzl" Nov 29 06:58:42 crc kubenswrapper[4947]: I1129 06:58:42.340548 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zggzl"] Nov 29 06:58:42 crc kubenswrapper[4947]: I1129 06:58:42.460379 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6h9z\" (UniqueName: \"kubernetes.io/projected/5544ed49-9bfd-43ed-bd78-0e54f833ec68-kube-api-access-x6h9z\") pod \"redhat-operators-zggzl\" (UID: \"5544ed49-9bfd-43ed-bd78-0e54f833ec68\") " pod="openshift-marketplace/redhat-operators-zggzl" Nov 29 06:58:42 crc kubenswrapper[4947]: I1129 06:58:42.460462 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5544ed49-9bfd-43ed-bd78-0e54f833ec68-utilities\") pod \"redhat-operators-zggzl\" (UID: \"5544ed49-9bfd-43ed-bd78-0e54f833ec68\") " pod="openshift-marketplace/redhat-operators-zggzl" Nov 29 06:58:42 crc kubenswrapper[4947]: I1129 06:58:42.461077 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5544ed49-9bfd-43ed-bd78-0e54f833ec68-catalog-content\") pod \"redhat-operators-zggzl\" (UID: \"5544ed49-9bfd-43ed-bd78-0e54f833ec68\") " 
pod="openshift-marketplace/redhat-operators-zggzl" Nov 29 06:58:42 crc kubenswrapper[4947]: I1129 06:58:42.563811 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5544ed49-9bfd-43ed-bd78-0e54f833ec68-catalog-content\") pod \"redhat-operators-zggzl\" (UID: \"5544ed49-9bfd-43ed-bd78-0e54f833ec68\") " pod="openshift-marketplace/redhat-operators-zggzl" Nov 29 06:58:42 crc kubenswrapper[4947]: I1129 06:58:42.563942 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6h9z\" (UniqueName: \"kubernetes.io/projected/5544ed49-9bfd-43ed-bd78-0e54f833ec68-kube-api-access-x6h9z\") pod \"redhat-operators-zggzl\" (UID: \"5544ed49-9bfd-43ed-bd78-0e54f833ec68\") " pod="openshift-marketplace/redhat-operators-zggzl" Nov 29 06:58:42 crc kubenswrapper[4947]: I1129 06:58:42.563979 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5544ed49-9bfd-43ed-bd78-0e54f833ec68-utilities\") pod \"redhat-operators-zggzl\" (UID: \"5544ed49-9bfd-43ed-bd78-0e54f833ec68\") " pod="openshift-marketplace/redhat-operators-zggzl" Nov 29 06:58:42 crc kubenswrapper[4947]: I1129 06:58:42.564787 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5544ed49-9bfd-43ed-bd78-0e54f833ec68-utilities\") pod \"redhat-operators-zggzl\" (UID: \"5544ed49-9bfd-43ed-bd78-0e54f833ec68\") " pod="openshift-marketplace/redhat-operators-zggzl" Nov 29 06:58:42 crc kubenswrapper[4947]: I1129 06:58:42.564877 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5544ed49-9bfd-43ed-bd78-0e54f833ec68-catalog-content\") pod \"redhat-operators-zggzl\" (UID: \"5544ed49-9bfd-43ed-bd78-0e54f833ec68\") " pod="openshift-marketplace/redhat-operators-zggzl" Nov 29 06:58:42 crc 
kubenswrapper[4947]: I1129 06:58:42.588915 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6h9z\" (UniqueName: \"kubernetes.io/projected/5544ed49-9bfd-43ed-bd78-0e54f833ec68-kube-api-access-x6h9z\") pod \"redhat-operators-zggzl\" (UID: \"5544ed49-9bfd-43ed-bd78-0e54f833ec68\") " pod="openshift-marketplace/redhat-operators-zggzl" Nov 29 06:58:42 crc kubenswrapper[4947]: I1129 06:58:42.665647 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zggzl" Nov 29 06:58:42 crc kubenswrapper[4947]: I1129 06:58:42.920406 4947 generic.go:334] "Generic (PLEG): container finished" podID="7185c3eb-f19f-4e6e-9625-77304f01c880" containerID="8d0f40ca186354fe09edd525ffdf4a4f4af91c911c09d4dc71a85681e99a1c12" exitCode=0 Nov 29 06:58:42 crc kubenswrapper[4947]: I1129 06:58:42.920947 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7185c3eb-f19f-4e6e-9625-77304f01c880","Type":"ContainerDied","Data":"8d0f40ca186354fe09edd525ffdf4a4f4af91c911c09d4dc71a85681e99a1c12"} Nov 29 06:58:42 crc kubenswrapper[4947]: I1129 06:58:42.943625 4947 generic.go:334] "Generic (PLEG): container finished" podID="954ecbc2-8aeb-4852-a501-aa51793e1cf6" containerID="0fab570bcdbe21a6be7498564ecc73fe8965348180fe9d5b18f7958d7b4700f6" exitCode=0 Nov 29 06:58:42 crc kubenswrapper[4947]: I1129 06:58:42.943937 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"954ecbc2-8aeb-4852-a501-aa51793e1cf6","Type":"ContainerDied","Data":"0fab570bcdbe21a6be7498564ecc73fe8965348180fe9d5b18f7958d7b4700f6"} Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.016065 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.180609 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/954ecbc2-8aeb-4852-a501-aa51793e1cf6-logs\") pod \"954ecbc2-8aeb-4852-a501-aa51793e1cf6\" (UID: \"954ecbc2-8aeb-4852-a501-aa51793e1cf6\") " Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.181284 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/954ecbc2-8aeb-4852-a501-aa51793e1cf6-config-data\") pod \"954ecbc2-8aeb-4852-a501-aa51793e1cf6\" (UID: \"954ecbc2-8aeb-4852-a501-aa51793e1cf6\") " Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.181412 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwnn7\" (UniqueName: \"kubernetes.io/projected/954ecbc2-8aeb-4852-a501-aa51793e1cf6-kube-api-access-vwnn7\") pod \"954ecbc2-8aeb-4852-a501-aa51793e1cf6\" (UID: \"954ecbc2-8aeb-4852-a501-aa51793e1cf6\") " Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.181498 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/954ecbc2-8aeb-4852-a501-aa51793e1cf6-combined-ca-bundle\") pod \"954ecbc2-8aeb-4852-a501-aa51793e1cf6\" (UID: \"954ecbc2-8aeb-4852-a501-aa51793e1cf6\") " Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.185576 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/954ecbc2-8aeb-4852-a501-aa51793e1cf6-logs" (OuterVolumeSpecName: "logs") pod "954ecbc2-8aeb-4852-a501-aa51793e1cf6" (UID: "954ecbc2-8aeb-4852-a501-aa51793e1cf6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.226835 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/954ecbc2-8aeb-4852-a501-aa51793e1cf6-kube-api-access-vwnn7" (OuterVolumeSpecName: "kube-api-access-vwnn7") pod "954ecbc2-8aeb-4852-a501-aa51793e1cf6" (UID: "954ecbc2-8aeb-4852-a501-aa51793e1cf6"). InnerVolumeSpecName "kube-api-access-vwnn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.264922 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/954ecbc2-8aeb-4852-a501-aa51793e1cf6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "954ecbc2-8aeb-4852-a501-aa51793e1cf6" (UID: "954ecbc2-8aeb-4852-a501-aa51793e1cf6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.271526 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.275066 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/954ecbc2-8aeb-4852-a501-aa51793e1cf6-config-data" (OuterVolumeSpecName: "config-data") pod "954ecbc2-8aeb-4852-a501-aa51793e1cf6" (UID: "954ecbc2-8aeb-4852-a501-aa51793e1cf6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.284343 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/954ecbc2-8aeb-4852-a501-aa51793e1cf6-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.284393 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwnn7\" (UniqueName: \"kubernetes.io/projected/954ecbc2-8aeb-4852-a501-aa51793e1cf6-kube-api-access-vwnn7\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.284405 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/954ecbc2-8aeb-4852-a501-aa51793e1cf6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.284418 4947 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/954ecbc2-8aeb-4852-a501-aa51793e1cf6-logs\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.386099 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7185c3eb-f19f-4e6e-9625-77304f01c880-combined-ca-bundle\") pod \"7185c3eb-f19f-4e6e-9625-77304f01c880\" (UID: \"7185c3eb-f19f-4e6e-9625-77304f01c880\") " Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.386186 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj6jq\" (UniqueName: \"kubernetes.io/projected/7185c3eb-f19f-4e6e-9625-77304f01c880-kube-api-access-tj6jq\") pod \"7185c3eb-f19f-4e6e-9625-77304f01c880\" (UID: \"7185c3eb-f19f-4e6e-9625-77304f01c880\") " Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.386313 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/7185c3eb-f19f-4e6e-9625-77304f01c880-config-data\") pod \"7185c3eb-f19f-4e6e-9625-77304f01c880\" (UID: \"7185c3eb-f19f-4e6e-9625-77304f01c880\") " Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.402743 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7185c3eb-f19f-4e6e-9625-77304f01c880-kube-api-access-tj6jq" (OuterVolumeSpecName: "kube-api-access-tj6jq") pod "7185c3eb-f19f-4e6e-9625-77304f01c880" (UID: "7185c3eb-f19f-4e6e-9625-77304f01c880"). InnerVolumeSpecName "kube-api-access-tj6jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.490032 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj6jq\" (UniqueName: \"kubernetes.io/projected/7185c3eb-f19f-4e6e-9625-77304f01c880-kube-api-access-tj6jq\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.511569 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7185c3eb-f19f-4e6e-9625-77304f01c880-config-data" (OuterVolumeSpecName: "config-data") pod "7185c3eb-f19f-4e6e-9625-77304f01c880" (UID: "7185c3eb-f19f-4e6e-9625-77304f01c880"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.565081 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zggzl"] Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.574188 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7185c3eb-f19f-4e6e-9625-77304f01c880-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7185c3eb-f19f-4e6e-9625-77304f01c880" (UID: "7185c3eb-f19f-4e6e-9625-77304f01c880"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.592885 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7185c3eb-f19f-4e6e-9625-77304f01c880-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.592927 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7185c3eb-f19f-4e6e-9625-77304f01c880-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.958728 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"954ecbc2-8aeb-4852-a501-aa51793e1cf6","Type":"ContainerDied","Data":"f6587b98e41dde67ecd6879c40bbea9c78a37485e018e6d5f3909d6b2ac6ad8e"} Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.959241 4947 scope.go:117] "RemoveContainer" containerID="0fab570bcdbe21a6be7498564ecc73fe8965348180fe9d5b18f7958d7b4700f6" Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.959292 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.964892 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7185c3eb-f19f-4e6e-9625-77304f01c880","Type":"ContainerDied","Data":"7250984409eebdbfd422627486a4ed36f73113c307cbd6b218cfefc61e59fbce"} Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.965107 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.980628 4947 generic.go:334] "Generic (PLEG): container finished" podID="5544ed49-9bfd-43ed-bd78-0e54f833ec68" containerID="1b7ca7c2a3a409687e2ec8c650dfd7cc9ce71ca66fc9790a50aa22f2d83947b5" exitCode=0 Nov 29 06:58:43 crc kubenswrapper[4947]: I1129 06:58:43.996074 4947 scope.go:117] "RemoveContainer" containerID="302f4aae5ead7b6804f23f02a4bf311ef4a162fff553d9b1f519be79ef1469e7" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:43.982471 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zggzl" event={"ID":"5544ed49-9bfd-43ed-bd78-0e54f833ec68","Type":"ContainerDied","Data":"1b7ca7c2a3a409687e2ec8c650dfd7cc9ce71ca66fc9790a50aa22f2d83947b5"} Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.000500 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zggzl" event={"ID":"5544ed49-9bfd-43ed-bd78-0e54f833ec68","Type":"ContainerStarted","Data":"bc6424768ab5325a40bdcab3c24484969fa79bdc97862bbb56beaa72294b6f7f"} Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.061566 4947 scope.go:117] "RemoveContainer" containerID="8d0f40ca186354fe09edd525ffdf4a4f4af91c911c09d4dc71a85681e99a1c12" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.134593 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.147318 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.167977 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.184590 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.192011 4947 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 29 06:58:44 crc kubenswrapper[4947]: E1129 06:58:44.192559 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7185c3eb-f19f-4e6e-9625-77304f01c880" containerName="nova-scheduler-scheduler" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.192577 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7185c3eb-f19f-4e6e-9625-77304f01c880" containerName="nova-scheduler-scheduler" Nov 29 06:58:44 crc kubenswrapper[4947]: E1129 06:58:44.192608 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="954ecbc2-8aeb-4852-a501-aa51793e1cf6" containerName="nova-api-api" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.192616 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="954ecbc2-8aeb-4852-a501-aa51793e1cf6" containerName="nova-api-api" Nov 29 06:58:44 crc kubenswrapper[4947]: E1129 06:58:44.192636 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="954ecbc2-8aeb-4852-a501-aa51793e1cf6" containerName="nova-api-log" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.192642 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="954ecbc2-8aeb-4852-a501-aa51793e1cf6" containerName="nova-api-log" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.192831 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="954ecbc2-8aeb-4852-a501-aa51793e1cf6" containerName="nova-api-log" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.192860 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="7185c3eb-f19f-4e6e-9625-77304f01c880" containerName="nova-scheduler-scheduler" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.192873 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="954ecbc2-8aeb-4852-a501-aa51793e1cf6" containerName="nova-api-api" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.194088 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.200021 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.200372 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.201536 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.204876 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.210299 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.220562 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.309133 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62b04c16-c119-4709-a694-441964ae360a-logs\") pod \"nova-api-0\" (UID: \"62b04c16-c119-4709-a694-441964ae360a\") " pod="openstack/nova-api-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.309256 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgcr9\" (UniqueName: \"kubernetes.io/projected/eb7e3676-7ede-4882-984c-4f2e68c73420-kube-api-access-mgcr9\") pod \"nova-scheduler-0\" (UID: \"eb7e3676-7ede-4882-984c-4f2e68c73420\") " pod="openstack/nova-scheduler-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.309303 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eb7e3676-7ede-4882-984c-4f2e68c73420-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"eb7e3676-7ede-4882-984c-4f2e68c73420\") " pod="openstack/nova-scheduler-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.309335 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s77x2\" (UniqueName: \"kubernetes.io/projected/62b04c16-c119-4709-a694-441964ae360a-kube-api-access-s77x2\") pod \"nova-api-0\" (UID: \"62b04c16-c119-4709-a694-441964ae360a\") " pod="openstack/nova-api-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.309374 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b04c16-c119-4709-a694-441964ae360a-config-data\") pod \"nova-api-0\" (UID: \"62b04c16-c119-4709-a694-441964ae360a\") " pod="openstack/nova-api-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.309398 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b04c16-c119-4709-a694-441964ae360a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"62b04c16-c119-4709-a694-441964ae360a\") " pod="openstack/nova-api-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.309712 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb7e3676-7ede-4882-984c-4f2e68c73420-config-data\") pod \"nova-scheduler-0\" (UID: \"eb7e3676-7ede-4882-984c-4f2e68c73420\") " pod="openstack/nova-scheduler-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.411550 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb7e3676-7ede-4882-984c-4f2e68c73420-config-data\") pod \"nova-scheduler-0\" (UID: 
\"eb7e3676-7ede-4882-984c-4f2e68c73420\") " pod="openstack/nova-scheduler-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.411626 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62b04c16-c119-4709-a694-441964ae360a-logs\") pod \"nova-api-0\" (UID: \"62b04c16-c119-4709-a694-441964ae360a\") " pod="openstack/nova-api-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.411651 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgcr9\" (UniqueName: \"kubernetes.io/projected/eb7e3676-7ede-4882-984c-4f2e68c73420-kube-api-access-mgcr9\") pod \"nova-scheduler-0\" (UID: \"eb7e3676-7ede-4882-984c-4f2e68c73420\") " pod="openstack/nova-scheduler-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.411674 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb7e3676-7ede-4882-984c-4f2e68c73420-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"eb7e3676-7ede-4882-984c-4f2e68c73420\") " pod="openstack/nova-scheduler-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.411697 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s77x2\" (UniqueName: \"kubernetes.io/projected/62b04c16-c119-4709-a694-441964ae360a-kube-api-access-s77x2\") pod \"nova-api-0\" (UID: \"62b04c16-c119-4709-a694-441964ae360a\") " pod="openstack/nova-api-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.411716 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b04c16-c119-4709-a694-441964ae360a-config-data\") pod \"nova-api-0\" (UID: \"62b04c16-c119-4709-a694-441964ae360a\") " pod="openstack/nova-api-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.411736 4947 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b04c16-c119-4709-a694-441964ae360a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"62b04c16-c119-4709-a694-441964ae360a\") " pod="openstack/nova-api-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.413864 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62b04c16-c119-4709-a694-441964ae360a-logs\") pod \"nova-api-0\" (UID: \"62b04c16-c119-4709-a694-441964ae360a\") " pod="openstack/nova-api-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.428645 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b04c16-c119-4709-a694-441964ae360a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"62b04c16-c119-4709-a694-441964ae360a\") " pod="openstack/nova-api-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.440211 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b04c16-c119-4709-a694-441964ae360a-config-data\") pod \"nova-api-0\" (UID: \"62b04c16-c119-4709-a694-441964ae360a\") " pod="openstack/nova-api-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.443994 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb7e3676-7ede-4882-984c-4f2e68c73420-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"eb7e3676-7ede-4882-984c-4f2e68c73420\") " pod="openstack/nova-scheduler-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.447170 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgcr9\" (UniqueName: \"kubernetes.io/projected/eb7e3676-7ede-4882-984c-4f2e68c73420-kube-api-access-mgcr9\") pod \"nova-scheduler-0\" (UID: \"eb7e3676-7ede-4882-984c-4f2e68c73420\") " pod="openstack/nova-scheduler-0" Nov 29 06:58:44 crc 
kubenswrapper[4947]: I1129 06:58:44.450842 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb7e3676-7ede-4882-984c-4f2e68c73420-config-data\") pod \"nova-scheduler-0\" (UID: \"eb7e3676-7ede-4882-984c-4f2e68c73420\") " pod="openstack/nova-scheduler-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.481297 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s77x2\" (UniqueName: \"kubernetes.io/projected/62b04c16-c119-4709-a694-441964ae360a-kube-api-access-s77x2\") pod \"nova-api-0\" (UID: \"62b04c16-c119-4709-a694-441964ae360a\") " pod="openstack/nova-api-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.525983 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.530345 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.533066 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 06:58:44 crc kubenswrapper[4947]: I1129 06:58:44.556304 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 06:58:45 crc kubenswrapper[4947]: I1129 06:58:45.434192 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7185c3eb-f19f-4e6e-9625-77304f01c880" path="/var/lib/kubelet/pods/7185c3eb-f19f-4e6e-9625-77304f01c880/volumes" Nov 29 06:58:45 crc kubenswrapper[4947]: I1129 06:58:45.436842 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="954ecbc2-8aeb-4852-a501-aa51793e1cf6" path="/var/lib/kubelet/pods/954ecbc2-8aeb-4852-a501-aa51793e1cf6/volumes" Nov 29 06:58:45 crc kubenswrapper[4947]: I1129 06:58:45.840861 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 06:58:45 crc kubenswrapper[4947]: W1129 06:58:45.848014 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb7e3676_7ede_4882_984c_4f2e68c73420.slice/crio-cc150f53865e730a423176da4e5df3db1b394478c8ca3aef88f0682974932700 WatchSource:0}: Error finding container cc150f53865e730a423176da4e5df3db1b394478c8ca3aef88f0682974932700: Status 404 returned error can't find the container with id cc150f53865e730a423176da4e5df3db1b394478c8ca3aef88f0682974932700 Nov 29 06:58:45 crc kubenswrapper[4947]: I1129 06:58:45.985598 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 06:58:45 crc kubenswrapper[4947]: W1129 06:58:45.996991 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62b04c16_c119_4709_a694_441964ae360a.slice/crio-649a8a2ff291382e0b00452d31ba3e58045ca7439b6445e0c4c26699300a7234 WatchSource:0}: Error finding container 649a8a2ff291382e0b00452d31ba3e58045ca7439b6445e0c4c26699300a7234: Status 404 returned error can't find the container with id 649a8a2ff291382e0b00452d31ba3e58045ca7439b6445e0c4c26699300a7234 Nov 29 06:58:46 crc kubenswrapper[4947]: I1129 
06:58:46.496442 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eb7e3676-7ede-4882-984c-4f2e68c73420","Type":"ContainerStarted","Data":"f00a8fe203a625eaac6622ebc6f9c8c85707ed2a66ebf16e16ab703139d741e7"} Nov 29 06:58:46 crc kubenswrapper[4947]: I1129 06:58:46.496508 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eb7e3676-7ede-4882-984c-4f2e68c73420","Type":"ContainerStarted","Data":"cc150f53865e730a423176da4e5df3db1b394478c8ca3aef88f0682974932700"} Nov 29 06:58:46 crc kubenswrapper[4947]: I1129 06:58:46.504775 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62b04c16-c119-4709-a694-441964ae360a","Type":"ContainerStarted","Data":"c85c026a3f6f16ddabaad780b40696d87275eae6448aed3f1fe1139afc81af8b"} Nov 29 06:58:46 crc kubenswrapper[4947]: I1129 06:58:46.504854 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62b04c16-c119-4709-a694-441964ae360a","Type":"ContainerStarted","Data":"649a8a2ff291382e0b00452d31ba3e58045ca7439b6445e0c4c26699300a7234"} Nov 29 06:58:46 crc kubenswrapper[4947]: I1129 06:58:46.508056 4947 generic.go:334] "Generic (PLEG): container finished" podID="5544ed49-9bfd-43ed-bd78-0e54f833ec68" containerID="5ad375efedb3967269790be87a19c40dad1d8088e196507aae7f602543b5df79" exitCode=0 Nov 29 06:58:46 crc kubenswrapper[4947]: I1129 06:58:46.508310 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zggzl" event={"ID":"5544ed49-9bfd-43ed-bd78-0e54f833ec68","Type":"ContainerDied","Data":"5ad375efedb3967269790be87a19c40dad1d8088e196507aae7f602543b5df79"} Nov 29 06:58:46 crc kubenswrapper[4947]: I1129 06:58:46.523834 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.52380065 podStartE2EDuration="2.52380065s" podCreationTimestamp="2025-11-29 
06:58:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:58:46.518103117 +0000 UTC m=+1477.562485198" watchObservedRunningTime="2025-11-29 06:58:46.52380065 +0000 UTC m=+1477.568182731" Nov 29 06:58:47 crc kubenswrapper[4947]: I1129 06:58:47.523773 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62b04c16-c119-4709-a694-441964ae360a","Type":"ContainerStarted","Data":"c916c3331e7fa6684b961e29efe870c572a05ebfdc1e68a00668230675c750b4"} Nov 29 06:58:47 crc kubenswrapper[4947]: I1129 06:58:47.532047 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zggzl" event={"ID":"5544ed49-9bfd-43ed-bd78-0e54f833ec68","Type":"ContainerStarted","Data":"8ebea970572d81c420d2ce82bba12f5121aa96280ee71bf414e008c5f7cd3d77"} Nov 29 06:58:47 crc kubenswrapper[4947]: I1129 06:58:47.556898 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.556871902 podStartE2EDuration="3.556871902s" podCreationTimestamp="2025-11-29 06:58:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:58:47.550674636 +0000 UTC m=+1478.595056737" watchObservedRunningTime="2025-11-29 06:58:47.556871902 +0000 UTC m=+1478.601253973" Nov 29 06:58:47 crc kubenswrapper[4947]: I1129 06:58:47.583596 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zggzl" podStartSLOduration=2.56436083 podStartE2EDuration="5.583566583s" podCreationTimestamp="2025-11-29 06:58:42 +0000 UTC" firstStartedPulling="2025-11-29 06:58:43.996502604 +0000 UTC m=+1475.040884685" lastFinishedPulling="2025-11-29 06:58:47.015708357 +0000 UTC m=+1478.060090438" observedRunningTime="2025-11-29 06:58:47.579037209 +0000 UTC 
m=+1478.623419290" watchObservedRunningTime="2025-11-29 06:58:47.583566583 +0000 UTC m=+1478.627948664" Nov 29 06:58:48 crc kubenswrapper[4947]: I1129 06:58:48.546780 4947 generic.go:334] "Generic (PLEG): container finished" podID="fdacfe2a-30f5-443f-b368-9019ec66fb2e" containerID="9ad70e03fefbcb9419496af9cc1b1c36fb6799180c7e812b53b5325346d0143a" exitCode=0 Nov 29 06:58:48 crc kubenswrapper[4947]: I1129 06:58:48.546879 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dvx7w" event={"ID":"fdacfe2a-30f5-443f-b368-9019ec66fb2e","Type":"ContainerDied","Data":"9ad70e03fefbcb9419496af9cc1b1c36fb6799180c7e812b53b5325346d0143a"} Nov 29 06:58:49 crc kubenswrapper[4947]: I1129 06:58:49.527935 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 29 06:58:49 crc kubenswrapper[4947]: I1129 06:58:49.528437 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 29 06:58:49 crc kubenswrapper[4947]: I1129 06:58:49.557836 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.053708 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dvx7w" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.103036 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrnkx\" (UniqueName: \"kubernetes.io/projected/fdacfe2a-30f5-443f-b368-9019ec66fb2e-kube-api-access-zrnkx\") pod \"fdacfe2a-30f5-443f-b368-9019ec66fb2e\" (UID: \"fdacfe2a-30f5-443f-b368-9019ec66fb2e\") " Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.103502 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdacfe2a-30f5-443f-b368-9019ec66fb2e-combined-ca-bundle\") pod \"fdacfe2a-30f5-443f-b368-9019ec66fb2e\" (UID: \"fdacfe2a-30f5-443f-b368-9019ec66fb2e\") " Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.103643 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdacfe2a-30f5-443f-b368-9019ec66fb2e-config-data\") pod \"fdacfe2a-30f5-443f-b368-9019ec66fb2e\" (UID: \"fdacfe2a-30f5-443f-b368-9019ec66fb2e\") " Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.104719 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdacfe2a-30f5-443f-b368-9019ec66fb2e-scripts\") pod \"fdacfe2a-30f5-443f-b368-9019ec66fb2e\" (UID: \"fdacfe2a-30f5-443f-b368-9019ec66fb2e\") " Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.112770 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdacfe2a-30f5-443f-b368-9019ec66fb2e-kube-api-access-zrnkx" (OuterVolumeSpecName: "kube-api-access-zrnkx") pod "fdacfe2a-30f5-443f-b368-9019ec66fb2e" (UID: "fdacfe2a-30f5-443f-b368-9019ec66fb2e"). InnerVolumeSpecName "kube-api-access-zrnkx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.134161 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdacfe2a-30f5-443f-b368-9019ec66fb2e-scripts" (OuterVolumeSpecName: "scripts") pod "fdacfe2a-30f5-443f-b368-9019ec66fb2e" (UID: "fdacfe2a-30f5-443f-b368-9019ec66fb2e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.158883 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdacfe2a-30f5-443f-b368-9019ec66fb2e-config-data" (OuterVolumeSpecName: "config-data") pod "fdacfe2a-30f5-443f-b368-9019ec66fb2e" (UID: "fdacfe2a-30f5-443f-b368-9019ec66fb2e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.173569 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdacfe2a-30f5-443f-b368-9019ec66fb2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdacfe2a-30f5-443f-b368-9019ec66fb2e" (UID: "fdacfe2a-30f5-443f-b368-9019ec66fb2e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.206792 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdacfe2a-30f5-443f-b368-9019ec66fb2e-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.206851 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrnkx\" (UniqueName: \"kubernetes.io/projected/fdacfe2a-30f5-443f-b368-9019ec66fb2e-kube-api-access-zrnkx\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.206870 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdacfe2a-30f5-443f-b368-9019ec66fb2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.206884 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdacfe2a-30f5-443f-b368-9019ec66fb2e-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.543537 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7845fd7b-f71a-4974-ae50-17ce9451207f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.176:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.543602 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7845fd7b-f71a-4974-ae50-17ce9451207f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.176:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.569895 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-dvx7w" event={"ID":"fdacfe2a-30f5-443f-b368-9019ec66fb2e","Type":"ContainerDied","Data":"74874b1f177acab1c66227cc94d092a66327e34ea3aae0f4ece2cdd79928f141"} Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.570016 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74874b1f177acab1c66227cc94d092a66327e34ea3aae0f4ece2cdd79928f141" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.570950 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dvx7w" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.704537 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 29 06:58:50 crc kubenswrapper[4947]: E1129 06:58:50.705099 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdacfe2a-30f5-443f-b368-9019ec66fb2e" containerName="nova-cell1-conductor-db-sync" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.705124 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdacfe2a-30f5-443f-b368-9019ec66fb2e" containerName="nova-cell1-conductor-db-sync" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.705433 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdacfe2a-30f5-443f-b368-9019ec66fb2e" containerName="nova-cell1-conductor-db-sync" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.706431 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.709505 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.717728 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.820137 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmwbn\" (UniqueName: \"kubernetes.io/projected/c41e44dd-b8f5-41e1-b296-abf1ed9bfda1-kube-api-access-jmwbn\") pod \"nova-cell1-conductor-0\" (UID: \"c41e44dd-b8f5-41e1-b296-abf1ed9bfda1\") " pod="openstack/nova-cell1-conductor-0" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.820638 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41e44dd-b8f5-41e1-b296-abf1ed9bfda1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c41e44dd-b8f5-41e1-b296-abf1ed9bfda1\") " pod="openstack/nova-cell1-conductor-0" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.820689 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41e44dd-b8f5-41e1-b296-abf1ed9bfda1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c41e44dd-b8f5-41e1-b296-abf1ed9bfda1\") " pod="openstack/nova-cell1-conductor-0" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.922585 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmwbn\" (UniqueName: \"kubernetes.io/projected/c41e44dd-b8f5-41e1-b296-abf1ed9bfda1-kube-api-access-jmwbn\") pod \"nova-cell1-conductor-0\" (UID: \"c41e44dd-b8f5-41e1-b296-abf1ed9bfda1\") " pod="openstack/nova-cell1-conductor-0" Nov 29 
06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.923050 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41e44dd-b8f5-41e1-b296-abf1ed9bfda1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c41e44dd-b8f5-41e1-b296-abf1ed9bfda1\") " pod="openstack/nova-cell1-conductor-0" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.923196 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41e44dd-b8f5-41e1-b296-abf1ed9bfda1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c41e44dd-b8f5-41e1-b296-abf1ed9bfda1\") " pod="openstack/nova-cell1-conductor-0" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.929232 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41e44dd-b8f5-41e1-b296-abf1ed9bfda1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c41e44dd-b8f5-41e1-b296-abf1ed9bfda1\") " pod="openstack/nova-cell1-conductor-0" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.929864 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41e44dd-b8f5-41e1-b296-abf1ed9bfda1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c41e44dd-b8f5-41e1-b296-abf1ed9bfda1\") " pod="openstack/nova-cell1-conductor-0" Nov 29 06:58:50 crc kubenswrapper[4947]: I1129 06:58:50.943804 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmwbn\" (UniqueName: \"kubernetes.io/projected/c41e44dd-b8f5-41e1-b296-abf1ed9bfda1-kube-api-access-jmwbn\") pod \"nova-cell1-conductor-0\" (UID: \"c41e44dd-b8f5-41e1-b296-abf1ed9bfda1\") " pod="openstack/nova-cell1-conductor-0" Nov 29 06:58:51 crc kubenswrapper[4947]: I1129 06:58:51.040739 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 29 06:58:51 crc kubenswrapper[4947]: W1129 06:58:51.590787 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc41e44dd_b8f5_41e1_b296_abf1ed9bfda1.slice/crio-307cff85ec8b02c80794e5447613f27e19d3ee8cde40c47550c0a756c5a91f00 WatchSource:0}: Error finding container 307cff85ec8b02c80794e5447613f27e19d3ee8cde40c47550c0a756c5a91f00: Status 404 returned error can't find the container with id 307cff85ec8b02c80794e5447613f27e19d3ee8cde40c47550c0a756c5a91f00 Nov 29 06:58:51 crc kubenswrapper[4947]: I1129 06:58:51.590863 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 29 06:58:52 crc kubenswrapper[4947]: I1129 06:58:52.601588 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c41e44dd-b8f5-41e1-b296-abf1ed9bfda1","Type":"ContainerStarted","Data":"c091f4a1d8a1884d61a9ed4b409ab6b100992a515ede19cec2514559d9bee7b7"} Nov 29 06:58:52 crc kubenswrapper[4947]: I1129 06:58:52.602050 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c41e44dd-b8f5-41e1-b296-abf1ed9bfda1","Type":"ContainerStarted","Data":"307cff85ec8b02c80794e5447613f27e19d3ee8cde40c47550c0a756c5a91f00"} Nov 29 06:58:52 crc kubenswrapper[4947]: I1129 06:58:52.603772 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 29 06:58:52 crc kubenswrapper[4947]: I1129 06:58:52.632290 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.632259247 podStartE2EDuration="2.632259247s" podCreationTimestamp="2025-11-29 06:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 
06:58:52.622577703 +0000 UTC m=+1483.666959834" watchObservedRunningTime="2025-11-29 06:58:52.632259247 +0000 UTC m=+1483.676641328" Nov 29 06:58:52 crc kubenswrapper[4947]: I1129 06:58:52.666006 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zggzl" Nov 29 06:58:52 crc kubenswrapper[4947]: I1129 06:58:52.666206 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zggzl" Nov 29 06:58:52 crc kubenswrapper[4947]: I1129 06:58:52.988279 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 06:58:52 crc kubenswrapper[4947]: I1129 06:58:52.988375 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 06:58:52 crc kubenswrapper[4947]: I1129 06:58:52.988454 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 06:58:52 crc kubenswrapper[4947]: I1129 06:58:52.989616 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"df51260596870c91ccb9712810f435518b0c8fd5a5c15540a25aafaee5eb1aa5"} pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 06:58:52 crc kubenswrapper[4947]: I1129 06:58:52.989707 4947 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" containerID="cri-o://df51260596870c91ccb9712810f435518b0c8fd5a5c15540a25aafaee5eb1aa5" gracePeriod=600 Nov 29 06:58:53 crc kubenswrapper[4947]: I1129 06:58:53.721750 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zggzl" podUID="5544ed49-9bfd-43ed-bd78-0e54f833ec68" containerName="registry-server" probeResult="failure" output=< Nov 29 06:58:53 crc kubenswrapper[4947]: timeout: failed to connect service ":50051" within 1s Nov 29 06:58:53 crc kubenswrapper[4947]: > Nov 29 06:58:54 crc kubenswrapper[4947]: I1129 06:58:54.526837 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 06:58:54 crc kubenswrapper[4947]: I1129 06:58:54.528790 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 06:58:54 crc kubenswrapper[4947]: I1129 06:58:54.558142 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 29 06:58:54 crc kubenswrapper[4947]: I1129 06:58:54.599755 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 29 06:58:54 crc kubenswrapper[4947]: I1129 06:58:54.657255 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 29 06:58:55 crc kubenswrapper[4947]: I1129 06:58:55.609471 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="62b04c16-c119-4709-a694-441964ae360a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.178:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 06:58:55 crc kubenswrapper[4947]: I1129 06:58:55.609533 4947 prober.go:107] 
"Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="62b04c16-c119-4709-a694-441964ae360a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.178:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 06:58:55 crc kubenswrapper[4947]: I1129 06:58:55.638564 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerID="df51260596870c91ccb9712810f435518b0c8fd5a5c15540a25aafaee5eb1aa5" exitCode=0 Nov 29 06:58:55 crc kubenswrapper[4947]: I1129 06:58:55.638647 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerDied","Data":"df51260596870c91ccb9712810f435518b0c8fd5a5c15540a25aafaee5eb1aa5"} Nov 29 06:58:55 crc kubenswrapper[4947]: I1129 06:58:55.638708 4947 scope.go:117] "RemoveContainer" containerID="a415fb27869ca193be5294677b2f866f2ec48db054e83e7f53b656f014c7087f" Nov 29 06:58:56 crc kubenswrapper[4947]: I1129 06:58:56.655468 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3"} Nov 29 06:58:59 crc kubenswrapper[4947]: I1129 06:58:59.535314 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 29 06:58:59 crc kubenswrapper[4947]: I1129 06:58:59.543474 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 29 06:58:59 crc kubenswrapper[4947]: I1129 06:58:59.544013 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 29 06:58:59 crc kubenswrapper[4947]: I1129 06:58:59.695625 4947 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 29 06:59:00 crc kubenswrapper[4947]: I1129 06:59:00.702334 4947 generic.go:334] "Generic (PLEG): container finished" podID="0549fba3-0ee0-400a-a8a2-545d91aa6a2b" containerID="0256943bfa58997ba30045b6c097151a0b5ea3412963e6ffe9e53ec0572ba363" exitCode=137 Nov 29 06:59:00 crc kubenswrapper[4947]: I1129 06:59:00.702526 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0549fba3-0ee0-400a-a8a2-545d91aa6a2b","Type":"ContainerDied","Data":"0256943bfa58997ba30045b6c097151a0b5ea3412963e6ffe9e53ec0572ba363"} Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.076380 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.078649 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.175344 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bv8s\" (UniqueName: \"kubernetes.io/projected/0549fba3-0ee0-400a-a8a2-545d91aa6a2b-kube-api-access-5bv8s\") pod \"0549fba3-0ee0-400a-a8a2-545d91aa6a2b\" (UID: \"0549fba3-0ee0-400a-a8a2-545d91aa6a2b\") " Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.175579 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0549fba3-0ee0-400a-a8a2-545d91aa6a2b-combined-ca-bundle\") pod \"0549fba3-0ee0-400a-a8a2-545d91aa6a2b\" (UID: \"0549fba3-0ee0-400a-a8a2-545d91aa6a2b\") " Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.175709 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0549fba3-0ee0-400a-a8a2-545d91aa6a2b-config-data\") pod 
\"0549fba3-0ee0-400a-a8a2-545d91aa6a2b\" (UID: \"0549fba3-0ee0-400a-a8a2-545d91aa6a2b\") " Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.188674 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0549fba3-0ee0-400a-a8a2-545d91aa6a2b-kube-api-access-5bv8s" (OuterVolumeSpecName: "kube-api-access-5bv8s") pod "0549fba3-0ee0-400a-a8a2-545d91aa6a2b" (UID: "0549fba3-0ee0-400a-a8a2-545d91aa6a2b"). InnerVolumeSpecName "kube-api-access-5bv8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.231635 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0549fba3-0ee0-400a-a8a2-545d91aa6a2b-config-data" (OuterVolumeSpecName: "config-data") pod "0549fba3-0ee0-400a-a8a2-545d91aa6a2b" (UID: "0549fba3-0ee0-400a-a8a2-545d91aa6a2b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.238015 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0549fba3-0ee0-400a-a8a2-545d91aa6a2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0549fba3-0ee0-400a-a8a2-545d91aa6a2b" (UID: "0549fba3-0ee0-400a-a8a2-545d91aa6a2b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.295378 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bv8s\" (UniqueName: \"kubernetes.io/projected/0549fba3-0ee0-400a-a8a2-545d91aa6a2b-kube-api-access-5bv8s\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.295968 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0549fba3-0ee0-400a-a8a2-545d91aa6a2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.296072 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0549fba3-0ee0-400a-a8a2-545d91aa6a2b-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.715085 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0549fba3-0ee0-400a-a8a2-545d91aa6a2b","Type":"ContainerDied","Data":"edd501b2b8bc7b72b6985f1b327951ed62944ba763033b1ad97ff4035144b6f5"} Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.715183 4947 scope.go:117] "RemoveContainer" containerID="0256943bfa58997ba30045b6c097151a0b5ea3412963e6ffe9e53ec0572ba363" Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.716403 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.754850 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.768158 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.791519 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 06:59:01 crc kubenswrapper[4947]: E1129 06:59:01.792070 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0549fba3-0ee0-400a-a8a2-545d91aa6a2b" containerName="nova-cell1-novncproxy-novncproxy" Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.792101 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="0549fba3-0ee0-400a-a8a2-545d91aa6a2b" containerName="nova-cell1-novncproxy-novncproxy" Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.792405 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="0549fba3-0ee0-400a-a8a2-545d91aa6a2b" containerName="nova-cell1-novncproxy-novncproxy" Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.793388 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.799841 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.800253 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.801359 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.820969 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.906781 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtx7k\" (UniqueName: \"kubernetes.io/projected/50cf9c2c-f6ba-4845-a384-9b689cf99484-kube-api-access-jtx7k\") pod \"nova-cell1-novncproxy-0\" (UID: \"50cf9c2c-f6ba-4845-a384-9b689cf99484\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.906935 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50cf9c2c-f6ba-4845-a384-9b689cf99484-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"50cf9c2c-f6ba-4845-a384-9b689cf99484\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.906978 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50cf9c2c-f6ba-4845-a384-9b689cf99484-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"50cf9c2c-f6ba-4845-a384-9b689cf99484\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:01 crc 
kubenswrapper[4947]: I1129 06:59:01.907009 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/50cf9c2c-f6ba-4845-a384-9b689cf99484-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"50cf9c2c-f6ba-4845-a384-9b689cf99484\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:01 crc kubenswrapper[4947]: I1129 06:59:01.907041 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/50cf9c2c-f6ba-4845-a384-9b689cf99484-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"50cf9c2c-f6ba-4845-a384-9b689cf99484\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:02 crc kubenswrapper[4947]: I1129 06:59:02.009809 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50cf9c2c-f6ba-4845-a384-9b689cf99484-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"50cf9c2c-f6ba-4845-a384-9b689cf99484\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:02 crc kubenswrapper[4947]: I1129 06:59:02.010833 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/50cf9c2c-f6ba-4845-a384-9b689cf99484-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"50cf9c2c-f6ba-4845-a384-9b689cf99484\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:02 crc kubenswrapper[4947]: I1129 06:59:02.010871 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/50cf9c2c-f6ba-4845-a384-9b689cf99484-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"50cf9c2c-f6ba-4845-a384-9b689cf99484\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:02 crc kubenswrapper[4947]: I1129 
06:59:02.011000 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtx7k\" (UniqueName: \"kubernetes.io/projected/50cf9c2c-f6ba-4845-a384-9b689cf99484-kube-api-access-jtx7k\") pod \"nova-cell1-novncproxy-0\" (UID: \"50cf9c2c-f6ba-4845-a384-9b689cf99484\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:02 crc kubenswrapper[4947]: I1129 06:59:02.011106 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50cf9c2c-f6ba-4845-a384-9b689cf99484-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"50cf9c2c-f6ba-4845-a384-9b689cf99484\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:02 crc kubenswrapper[4947]: I1129 06:59:02.015023 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50cf9c2c-f6ba-4845-a384-9b689cf99484-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"50cf9c2c-f6ba-4845-a384-9b689cf99484\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:02 crc kubenswrapper[4947]: I1129 06:59:02.016640 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50cf9c2c-f6ba-4845-a384-9b689cf99484-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"50cf9c2c-f6ba-4845-a384-9b689cf99484\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:02 crc kubenswrapper[4947]: I1129 06:59:02.021101 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/50cf9c2c-f6ba-4845-a384-9b689cf99484-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"50cf9c2c-f6ba-4845-a384-9b689cf99484\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:02 crc kubenswrapper[4947]: I1129 06:59:02.021889 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/50cf9c2c-f6ba-4845-a384-9b689cf99484-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"50cf9c2c-f6ba-4845-a384-9b689cf99484\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:02 crc kubenswrapper[4947]: I1129 06:59:02.039202 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtx7k\" (UniqueName: \"kubernetes.io/projected/50cf9c2c-f6ba-4845-a384-9b689cf99484-kube-api-access-jtx7k\") pod \"nova-cell1-novncproxy-0\" (UID: \"50cf9c2c-f6ba-4845-a384-9b689cf99484\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:02 crc kubenswrapper[4947]: I1129 06:59:02.128421 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:02 crc kubenswrapper[4947]: I1129 06:59:02.668261 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 06:59:02 crc kubenswrapper[4947]: W1129 06:59:02.669758 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50cf9c2c_f6ba_4845_a384_9b689cf99484.slice/crio-775885a43b707f0d46ff0b243fc7a115d35ecc60d0c6cc78d5c6be5d964f8a85 WatchSource:0}: Error finding container 775885a43b707f0d46ff0b243fc7a115d35ecc60d0c6cc78d5c6be5d964f8a85: Status 404 returned error can't find the container with id 775885a43b707f0d46ff0b243fc7a115d35ecc60d0c6cc78d5c6be5d964f8a85 Nov 29 06:59:02 crc kubenswrapper[4947]: I1129 06:59:02.734952 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"50cf9c2c-f6ba-4845-a384-9b689cf99484","Type":"ContainerStarted","Data":"775885a43b707f0d46ff0b243fc7a115d35ecc60d0c6cc78d5c6be5d964f8a85"} Nov 29 06:59:02 crc kubenswrapper[4947]: I1129 06:59:02.742475 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-zggzl" Nov 29 06:59:02 crc kubenswrapper[4947]: I1129 06:59:02.801633 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zggzl" Nov 29 06:59:03 crc kubenswrapper[4947]: I1129 06:59:03.191916 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0549fba3-0ee0-400a-a8a2-545d91aa6a2b" path="/var/lib/kubelet/pods/0549fba3-0ee0-400a-a8a2-545d91aa6a2b/volumes" Nov 29 06:59:03 crc kubenswrapper[4947]: I1129 06:59:03.745612 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zggzl"] Nov 29 06:59:03 crc kubenswrapper[4947]: I1129 06:59:03.750426 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"50cf9c2c-f6ba-4845-a384-9b689cf99484","Type":"ContainerStarted","Data":"d43d1e739cc6e8f862d28a4a69c29478573316f546991172eb19c2b1756ff8af"} Nov 29 06:59:03 crc kubenswrapper[4947]: I1129 06:59:03.777062 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.777035415 podStartE2EDuration="2.777035415s" podCreationTimestamp="2025-11-29 06:59:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:59:03.772307116 +0000 UTC m=+1494.816689217" watchObservedRunningTime="2025-11-29 06:59:03.777035415 +0000 UTC m=+1494.821417496" Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.530786 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.531272 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.531885 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-api-0" Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.532146 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.535295 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.556457 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.762515 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zggzl" podUID="5544ed49-9bfd-43ed-bd78-0e54f833ec68" containerName="registry-server" containerID="cri-o://8ebea970572d81c420d2ce82bba12f5121aa96280ee71bf414e008c5f7cd3d77" gracePeriod=2 Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.768956 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-8n7gk"] Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.771586 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.788282 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-8n7gk"] Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.876762 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-dns-svc\") pod \"dnsmasq-dns-5b856c5697-8n7gk\" (UID: \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\") " pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.877005 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-config\") pod \"dnsmasq-dns-5b856c5697-8n7gk\" (UID: \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\") " pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.877181 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-8n7gk\" (UID: \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\") " pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.877305 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-8n7gk\" (UID: \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\") " pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.877385 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jvqkm\" (UniqueName: \"kubernetes.io/projected/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-kube-api-access-jvqkm\") pod \"dnsmasq-dns-5b856c5697-8n7gk\" (UID: \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\") " pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.980809 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-dns-svc\") pod \"dnsmasq-dns-5b856c5697-8n7gk\" (UID: \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\") " pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.981304 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-config\") pod \"dnsmasq-dns-5b856c5697-8n7gk\" (UID: \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\") " pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.981396 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-8n7gk\" (UID: \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\") " pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.981455 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-8n7gk\" (UID: \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\") " pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.981509 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvqkm\" (UniqueName: 
\"kubernetes.io/projected/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-kube-api-access-jvqkm\") pod \"dnsmasq-dns-5b856c5697-8n7gk\" (UID: \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\") " pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.982368 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-dns-svc\") pod \"dnsmasq-dns-5b856c5697-8n7gk\" (UID: \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\") " pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.983441 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-8n7gk\" (UID: \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\") " pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.986299 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-config\") pod \"dnsmasq-dns-5b856c5697-8n7gk\" (UID: \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\") " pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" Nov 29 06:59:04 crc kubenswrapper[4947]: I1129 06:59:04.987392 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-8n7gk\" (UID: \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\") " pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.012644 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvqkm\" (UniqueName: \"kubernetes.io/projected/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-kube-api-access-jvqkm\") pod 
\"dnsmasq-dns-5b856c5697-8n7gk\" (UID: \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\") " pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.127099 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.437678 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zggzl" Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.613200 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5544ed49-9bfd-43ed-bd78-0e54f833ec68-catalog-content\") pod \"5544ed49-9bfd-43ed-bd78-0e54f833ec68\" (UID: \"5544ed49-9bfd-43ed-bd78-0e54f833ec68\") " Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.613390 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5544ed49-9bfd-43ed-bd78-0e54f833ec68-utilities\") pod \"5544ed49-9bfd-43ed-bd78-0e54f833ec68\" (UID: \"5544ed49-9bfd-43ed-bd78-0e54f833ec68\") " Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.613483 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6h9z\" (UniqueName: \"kubernetes.io/projected/5544ed49-9bfd-43ed-bd78-0e54f833ec68-kube-api-access-x6h9z\") pod \"5544ed49-9bfd-43ed-bd78-0e54f833ec68\" (UID: \"5544ed49-9bfd-43ed-bd78-0e54f833ec68\") " Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.617162 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5544ed49-9bfd-43ed-bd78-0e54f833ec68-utilities" (OuterVolumeSpecName: "utilities") pod "5544ed49-9bfd-43ed-bd78-0e54f833ec68" (UID: "5544ed49-9bfd-43ed-bd78-0e54f833ec68"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.636667 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5544ed49-9bfd-43ed-bd78-0e54f833ec68-kube-api-access-x6h9z" (OuterVolumeSpecName: "kube-api-access-x6h9z") pod "5544ed49-9bfd-43ed-bd78-0e54f833ec68" (UID: "5544ed49-9bfd-43ed-bd78-0e54f833ec68"). InnerVolumeSpecName "kube-api-access-x6h9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.716728 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5544ed49-9bfd-43ed-bd78-0e54f833ec68-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.716777 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6h9z\" (UniqueName: \"kubernetes.io/projected/5544ed49-9bfd-43ed-bd78-0e54f833ec68-kube-api-access-x6h9z\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.784735 4947 generic.go:334] "Generic (PLEG): container finished" podID="5544ed49-9bfd-43ed-bd78-0e54f833ec68" containerID="8ebea970572d81c420d2ce82bba12f5121aa96280ee71bf414e008c5f7cd3d77" exitCode=0 Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.786053 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zggzl" Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.786282 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zggzl" event={"ID":"5544ed49-9bfd-43ed-bd78-0e54f833ec68","Type":"ContainerDied","Data":"8ebea970572d81c420d2ce82bba12f5121aa96280ee71bf414e008c5f7cd3d77"} Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.786392 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zggzl" event={"ID":"5544ed49-9bfd-43ed-bd78-0e54f833ec68","Type":"ContainerDied","Data":"bc6424768ab5325a40bdcab3c24484969fa79bdc97862bbb56beaa72294b6f7f"} Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.786423 4947 scope.go:117] "RemoveContainer" containerID="8ebea970572d81c420d2ce82bba12f5121aa96280ee71bf414e008c5f7cd3d77" Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.820815 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5544ed49-9bfd-43ed-bd78-0e54f833ec68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5544ed49-9bfd-43ed-bd78-0e54f833ec68" (UID: "5544ed49-9bfd-43ed-bd78-0e54f833ec68"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.823236 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5544ed49-9bfd-43ed-bd78-0e54f833ec68-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.854936 4947 scope.go:117] "RemoveContainer" containerID="5ad375efedb3967269790be87a19c40dad1d8088e196507aae7f602543b5df79" Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.917474 4947 scope.go:117] "RemoveContainer" containerID="1b7ca7c2a3a409687e2ec8c650dfd7cc9ce71ca66fc9790a50aa22f2d83947b5" Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.969211 4947 scope.go:117] "RemoveContainer" containerID="8ebea970572d81c420d2ce82bba12f5121aa96280ee71bf414e008c5f7cd3d77" Nov 29 06:59:05 crc kubenswrapper[4947]: E1129 06:59:05.971105 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ebea970572d81c420d2ce82bba12f5121aa96280ee71bf414e008c5f7cd3d77\": container with ID starting with 8ebea970572d81c420d2ce82bba12f5121aa96280ee71bf414e008c5f7cd3d77 not found: ID does not exist" containerID="8ebea970572d81c420d2ce82bba12f5121aa96280ee71bf414e008c5f7cd3d77" Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.971172 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ebea970572d81c420d2ce82bba12f5121aa96280ee71bf414e008c5f7cd3d77"} err="failed to get container status \"8ebea970572d81c420d2ce82bba12f5121aa96280ee71bf414e008c5f7cd3d77\": rpc error: code = NotFound desc = could not find container \"8ebea970572d81c420d2ce82bba12f5121aa96280ee71bf414e008c5f7cd3d77\": container with ID starting with 8ebea970572d81c420d2ce82bba12f5121aa96280ee71bf414e008c5f7cd3d77 not found: ID does not exist" Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.971210 4947 
scope.go:117] "RemoveContainer" containerID="5ad375efedb3967269790be87a19c40dad1d8088e196507aae7f602543b5df79" Nov 29 06:59:05 crc kubenswrapper[4947]: E1129 06:59:05.972756 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ad375efedb3967269790be87a19c40dad1d8088e196507aae7f602543b5df79\": container with ID starting with 5ad375efedb3967269790be87a19c40dad1d8088e196507aae7f602543b5df79 not found: ID does not exist" containerID="5ad375efedb3967269790be87a19c40dad1d8088e196507aae7f602543b5df79" Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.972844 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ad375efedb3967269790be87a19c40dad1d8088e196507aae7f602543b5df79"} err="failed to get container status \"5ad375efedb3967269790be87a19c40dad1d8088e196507aae7f602543b5df79\": rpc error: code = NotFound desc = could not find container \"5ad375efedb3967269790be87a19c40dad1d8088e196507aae7f602543b5df79\": container with ID starting with 5ad375efedb3967269790be87a19c40dad1d8088e196507aae7f602543b5df79 not found: ID does not exist" Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.972887 4947 scope.go:117] "RemoveContainer" containerID="1b7ca7c2a3a409687e2ec8c650dfd7cc9ce71ca66fc9790a50aa22f2d83947b5" Nov 29 06:59:05 crc kubenswrapper[4947]: E1129 06:59:05.973878 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b7ca7c2a3a409687e2ec8c650dfd7cc9ce71ca66fc9790a50aa22f2d83947b5\": container with ID starting with 1b7ca7c2a3a409687e2ec8c650dfd7cc9ce71ca66fc9790a50aa22f2d83947b5 not found: ID does not exist" containerID="1b7ca7c2a3a409687e2ec8c650dfd7cc9ce71ca66fc9790a50aa22f2d83947b5" Nov 29 06:59:05 crc kubenswrapper[4947]: I1129 06:59:05.974073 4947 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1b7ca7c2a3a409687e2ec8c650dfd7cc9ce71ca66fc9790a50aa22f2d83947b5"} err="failed to get container status \"1b7ca7c2a3a409687e2ec8c650dfd7cc9ce71ca66fc9790a50aa22f2d83947b5\": rpc error: code = NotFound desc = could not find container \"1b7ca7c2a3a409687e2ec8c650dfd7cc9ce71ca66fc9790a50aa22f2d83947b5\": container with ID starting with 1b7ca7c2a3a409687e2ec8c650dfd7cc9ce71ca66fc9790a50aa22f2d83947b5 not found: ID does not exist" Nov 29 06:59:06 crc kubenswrapper[4947]: I1129 06:59:06.098386 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-8n7gk"] Nov 29 06:59:06 crc kubenswrapper[4947]: I1129 06:59:06.239057 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 29 06:59:06 crc kubenswrapper[4947]: I1129 06:59:06.310341 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zggzl"] Nov 29 06:59:06 crc kubenswrapper[4947]: I1129 06:59:06.325817 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zggzl"] Nov 29 06:59:06 crc kubenswrapper[4947]: I1129 06:59:06.802414 4947 generic.go:334] "Generic (PLEG): container finished" podID="8d82e0fd-ef7f-47cd-b7f7-095c424197dd" containerID="97b3127733878640f163af5537cd8db424d7267bc7b1a642e109a3719b211c84" exitCode=0 Nov 29 06:59:06 crc kubenswrapper[4947]: I1129 06:59:06.802469 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" event={"ID":"8d82e0fd-ef7f-47cd-b7f7-095c424197dd","Type":"ContainerDied","Data":"97b3127733878640f163af5537cd8db424d7267bc7b1a642e109a3719b211c84"} Nov 29 06:59:06 crc kubenswrapper[4947]: I1129 06:59:06.804946 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" 
event={"ID":"8d82e0fd-ef7f-47cd-b7f7-095c424197dd","Type":"ContainerStarted","Data":"694b057e23fc50cef6251a82b22842acd3d4bfb613c1295bca17c17a1190aa31"} Nov 29 06:59:07 crc kubenswrapper[4947]: I1129 06:59:07.129177 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:07 crc kubenswrapper[4947]: I1129 06:59:07.197788 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5544ed49-9bfd-43ed-bd78-0e54f833ec68" path="/var/lib/kubelet/pods/5544ed49-9bfd-43ed-bd78-0e54f833ec68/volumes" Nov 29 06:59:07 crc kubenswrapper[4947]: I1129 06:59:07.819807 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" event={"ID":"8d82e0fd-ef7f-47cd-b7f7-095c424197dd","Type":"ContainerStarted","Data":"206c4cf53a60f433f8e48ff65f16043af4a48924eed77f760bda76029c869633"} Nov 29 06:59:07 crc kubenswrapper[4947]: I1129 06:59:07.820238 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" Nov 29 06:59:07 crc kubenswrapper[4947]: I1129 06:59:07.849933 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" podStartSLOduration=3.849900105 podStartE2EDuration="3.849900105s" podCreationTimestamp="2025-11-29 06:59:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:59:07.844142391 +0000 UTC m=+1498.888524482" watchObservedRunningTime="2025-11-29 06:59:07.849900105 +0000 UTC m=+1498.894282186" Nov 29 06:59:08 crc kubenswrapper[4947]: I1129 06:59:08.005379 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 06:59:08 crc kubenswrapper[4947]: I1129 06:59:08.006500 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="62b04c16-c119-4709-a694-441964ae360a" 
containerName="nova-api-log" containerID="cri-o://c85c026a3f6f16ddabaad780b40696d87275eae6448aed3f1fe1139afc81af8b" gracePeriod=30 Nov 29 06:59:08 crc kubenswrapper[4947]: I1129 06:59:08.006662 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="62b04c16-c119-4709-a694-441964ae360a" containerName="nova-api-api" containerID="cri-o://c916c3331e7fa6684b961e29efe870c572a05ebfdc1e68a00668230675c750b4" gracePeriod=30 Nov 29 06:59:08 crc kubenswrapper[4947]: I1129 06:59:08.488018 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:59:08 crc kubenswrapper[4947]: I1129 06:59:08.488988 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="222c733d-f870-4232-b9d9-d2a9c738927f" containerName="ceilometer-central-agent" containerID="cri-o://ab257a326615ba400b6aabdcefccf2c813646047394625533dff7162ce23e0c1" gracePeriod=30 Nov 29 06:59:08 crc kubenswrapper[4947]: I1129 06:59:08.489079 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="222c733d-f870-4232-b9d9-d2a9c738927f" containerName="proxy-httpd" containerID="cri-o://5757b49b55088a093bd1c1509a030821dc46b673412467c680cfd40b2f2ac60c" gracePeriod=30 Nov 29 06:59:08 crc kubenswrapper[4947]: I1129 06:59:08.489161 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="222c733d-f870-4232-b9d9-d2a9c738927f" containerName="sg-core" containerID="cri-o://9ff2ae046700848d2b08ddcd201293d0059cb6918dec496bac3564bee1933bd9" gracePeriod=30 Nov 29 06:59:08 crc kubenswrapper[4947]: I1129 06:59:08.489254 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="222c733d-f870-4232-b9d9-d2a9c738927f" containerName="ceilometer-notification-agent" 
containerID="cri-o://46e0be66f82a12264ad3c18bad009d8802e669b6c4f7bf497631fae941b3657c" gracePeriod=30 Nov 29 06:59:08 crc kubenswrapper[4947]: I1129 06:59:08.836117 4947 generic.go:334] "Generic (PLEG): container finished" podID="222c733d-f870-4232-b9d9-d2a9c738927f" containerID="9ff2ae046700848d2b08ddcd201293d0059cb6918dec496bac3564bee1933bd9" exitCode=2 Nov 29 06:59:08 crc kubenswrapper[4947]: I1129 06:59:08.836183 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"222c733d-f870-4232-b9d9-d2a9c738927f","Type":"ContainerDied","Data":"9ff2ae046700848d2b08ddcd201293d0059cb6918dec496bac3564bee1933bd9"} Nov 29 06:59:08 crc kubenswrapper[4947]: I1129 06:59:08.839391 4947 generic.go:334] "Generic (PLEG): container finished" podID="62b04c16-c119-4709-a694-441964ae360a" containerID="c85c026a3f6f16ddabaad780b40696d87275eae6448aed3f1fe1139afc81af8b" exitCode=143 Nov 29 06:59:08 crc kubenswrapper[4947]: I1129 06:59:08.839424 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62b04c16-c119-4709-a694-441964ae360a","Type":"ContainerDied","Data":"c85c026a3f6f16ddabaad780b40696d87275eae6448aed3f1fe1139afc81af8b"} Nov 29 06:59:09 crc kubenswrapper[4947]: I1129 06:59:09.855302 4947 generic.go:334] "Generic (PLEG): container finished" podID="222c733d-f870-4232-b9d9-d2a9c738927f" containerID="5757b49b55088a093bd1c1509a030821dc46b673412467c680cfd40b2f2ac60c" exitCode=0 Nov 29 06:59:09 crc kubenswrapper[4947]: I1129 06:59:09.855763 4947 generic.go:334] "Generic (PLEG): container finished" podID="222c733d-f870-4232-b9d9-d2a9c738927f" containerID="ab257a326615ba400b6aabdcefccf2c813646047394625533dff7162ce23e0c1" exitCode=0 Nov 29 06:59:09 crc kubenswrapper[4947]: I1129 06:59:09.855394 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"222c733d-f870-4232-b9d9-d2a9c738927f","Type":"ContainerDied","Data":"5757b49b55088a093bd1c1509a030821dc46b673412467c680cfd40b2f2ac60c"} Nov 29 06:59:09 crc kubenswrapper[4947]: I1129 06:59:09.855825 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"222c733d-f870-4232-b9d9-d2a9c738927f","Type":"ContainerDied","Data":"ab257a326615ba400b6aabdcefccf2c813646047394625533dff7162ce23e0c1"} Nov 29 06:59:11 crc kubenswrapper[4947]: I1129 06:59:11.771210 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 06:59:11 crc kubenswrapper[4947]: I1129 06:59:11.854448 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b04c16-c119-4709-a694-441964ae360a-config-data\") pod \"62b04c16-c119-4709-a694-441964ae360a\" (UID: \"62b04c16-c119-4709-a694-441964ae360a\") " Nov 29 06:59:11 crc kubenswrapper[4947]: I1129 06:59:11.854671 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s77x2\" (UniqueName: \"kubernetes.io/projected/62b04c16-c119-4709-a694-441964ae360a-kube-api-access-s77x2\") pod \"62b04c16-c119-4709-a694-441964ae360a\" (UID: \"62b04c16-c119-4709-a694-441964ae360a\") " Nov 29 06:59:11 crc kubenswrapper[4947]: I1129 06:59:11.854716 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62b04c16-c119-4709-a694-441964ae360a-logs\") pod \"62b04c16-c119-4709-a694-441964ae360a\" (UID: \"62b04c16-c119-4709-a694-441964ae360a\") " Nov 29 06:59:11 crc kubenswrapper[4947]: I1129 06:59:11.854788 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b04c16-c119-4709-a694-441964ae360a-combined-ca-bundle\") pod \"62b04c16-c119-4709-a694-441964ae360a\" (UID: 
\"62b04c16-c119-4709-a694-441964ae360a\") " Nov 29 06:59:11 crc kubenswrapper[4947]: I1129 06:59:11.856961 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62b04c16-c119-4709-a694-441964ae360a-logs" (OuterVolumeSpecName: "logs") pod "62b04c16-c119-4709-a694-441964ae360a" (UID: "62b04c16-c119-4709-a694-441964ae360a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:59:11 crc kubenswrapper[4947]: I1129 06:59:11.871225 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62b04c16-c119-4709-a694-441964ae360a-kube-api-access-s77x2" (OuterVolumeSpecName: "kube-api-access-s77x2") pod "62b04c16-c119-4709-a694-441964ae360a" (UID: "62b04c16-c119-4709-a694-441964ae360a"). InnerVolumeSpecName "kube-api-access-s77x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:59:11 crc kubenswrapper[4947]: I1129 06:59:11.897970 4947 generic.go:334] "Generic (PLEG): container finished" podID="62b04c16-c119-4709-a694-441964ae360a" containerID="c916c3331e7fa6684b961e29efe870c572a05ebfdc1e68a00668230675c750b4" exitCode=0 Nov 29 06:59:11 crc kubenswrapper[4947]: I1129 06:59:11.898094 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62b04c16-c119-4709-a694-441964ae360a","Type":"ContainerDied","Data":"c916c3331e7fa6684b961e29efe870c572a05ebfdc1e68a00668230675c750b4"} Nov 29 06:59:11 crc kubenswrapper[4947]: I1129 06:59:11.898128 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62b04c16-c119-4709-a694-441964ae360a","Type":"ContainerDied","Data":"649a8a2ff291382e0b00452d31ba3e58045ca7439b6445e0c4c26699300a7234"} Nov 29 06:59:11 crc kubenswrapper[4947]: I1129 06:59:11.898167 4947 scope.go:117] "RemoveContainer" containerID="c916c3331e7fa6684b961e29efe870c572a05ebfdc1e68a00668230675c750b4" Nov 29 06:59:11 crc kubenswrapper[4947]: I1129 
06:59:11.898458 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 06:59:11 crc kubenswrapper[4947]: I1129 06:59:11.929901 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62b04c16-c119-4709-a694-441964ae360a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62b04c16-c119-4709-a694-441964ae360a" (UID: "62b04c16-c119-4709-a694-441964ae360a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:59:11 crc kubenswrapper[4947]: I1129 06:59:11.939399 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62b04c16-c119-4709-a694-441964ae360a-config-data" (OuterVolumeSpecName: "config-data") pod "62b04c16-c119-4709-a694-441964ae360a" (UID: "62b04c16-c119-4709-a694-441964ae360a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:59:11 crc kubenswrapper[4947]: I1129 06:59:11.960444 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s77x2\" (UniqueName: \"kubernetes.io/projected/62b04c16-c119-4709-a694-441964ae360a-kube-api-access-s77x2\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:11 crc kubenswrapper[4947]: I1129 06:59:11.960489 4947 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62b04c16-c119-4709-a694-441964ae360a-logs\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:11 crc kubenswrapper[4947]: I1129 06:59:11.960502 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b04c16-c119-4709-a694-441964ae360a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:11 crc kubenswrapper[4947]: I1129 06:59:11.960513 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/62b04c16-c119-4709-a694-441964ae360a-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.009443 4947 scope.go:117] "RemoveContainer" containerID="c85c026a3f6f16ddabaad780b40696d87275eae6448aed3f1fe1139afc81af8b" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.130577 4947 scope.go:117] "RemoveContainer" containerID="c916c3331e7fa6684b961e29efe870c572a05ebfdc1e68a00668230675c750b4" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.130814 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:12 crc kubenswrapper[4947]: E1129 06:59:12.133469 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c916c3331e7fa6684b961e29efe870c572a05ebfdc1e68a00668230675c750b4\": container with ID starting with c916c3331e7fa6684b961e29efe870c572a05ebfdc1e68a00668230675c750b4 not found: ID does not exist" containerID="c916c3331e7fa6684b961e29efe870c572a05ebfdc1e68a00668230675c750b4" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.133566 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c916c3331e7fa6684b961e29efe870c572a05ebfdc1e68a00668230675c750b4"} err="failed to get container status \"c916c3331e7fa6684b961e29efe870c572a05ebfdc1e68a00668230675c750b4\": rpc error: code = NotFound desc = could not find container \"c916c3331e7fa6684b961e29efe870c572a05ebfdc1e68a00668230675c750b4\": container with ID starting with c916c3331e7fa6684b961e29efe870c572a05ebfdc1e68a00668230675c750b4 not found: ID does not exist" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.133624 4947 scope.go:117] "RemoveContainer" containerID="c85c026a3f6f16ddabaad780b40696d87275eae6448aed3f1fe1139afc81af8b" Nov 29 06:59:12 crc kubenswrapper[4947]: E1129 06:59:12.134610 4947 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c85c026a3f6f16ddabaad780b40696d87275eae6448aed3f1fe1139afc81af8b\": container with ID starting with c85c026a3f6f16ddabaad780b40696d87275eae6448aed3f1fe1139afc81af8b not found: ID does not exist" containerID="c85c026a3f6f16ddabaad780b40696d87275eae6448aed3f1fe1139afc81af8b" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.134645 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c85c026a3f6f16ddabaad780b40696d87275eae6448aed3f1fe1139afc81af8b"} err="failed to get container status \"c85c026a3f6f16ddabaad780b40696d87275eae6448aed3f1fe1139afc81af8b\": rpc error: code = NotFound desc = could not find container \"c85c026a3f6f16ddabaad780b40696d87275eae6448aed3f1fe1139afc81af8b\": container with ID starting with c85c026a3f6f16ddabaad780b40696d87275eae6448aed3f1fe1139afc81af8b not found: ID does not exist" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.175288 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.261660 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.282506 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.316922 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 29 06:59:12 crc kubenswrapper[4947]: E1129 06:59:12.317874 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b04c16-c119-4709-a694-441964ae360a" containerName="nova-api-api" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.317913 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b04c16-c119-4709-a694-441964ae360a" containerName="nova-api-api" Nov 29 06:59:12 crc kubenswrapper[4947]: E1129 
06:59:12.317951 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5544ed49-9bfd-43ed-bd78-0e54f833ec68" containerName="registry-server" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.317961 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="5544ed49-9bfd-43ed-bd78-0e54f833ec68" containerName="registry-server" Nov 29 06:59:12 crc kubenswrapper[4947]: E1129 06:59:12.317991 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5544ed49-9bfd-43ed-bd78-0e54f833ec68" containerName="extract-utilities" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.318002 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="5544ed49-9bfd-43ed-bd78-0e54f833ec68" containerName="extract-utilities" Nov 29 06:59:12 crc kubenswrapper[4947]: E1129 06:59:12.318024 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b04c16-c119-4709-a694-441964ae360a" containerName="nova-api-log" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.318032 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b04c16-c119-4709-a694-441964ae360a" containerName="nova-api-log" Nov 29 06:59:12 crc kubenswrapper[4947]: E1129 06:59:12.318052 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5544ed49-9bfd-43ed-bd78-0e54f833ec68" containerName="extract-content" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.318061 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="5544ed49-9bfd-43ed-bd78-0e54f833ec68" containerName="extract-content" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.318351 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="62b04c16-c119-4709-a694-441964ae360a" containerName="nova-api-api" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.318371 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="5544ed49-9bfd-43ed-bd78-0e54f833ec68" containerName="registry-server" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 
06:59:12.318383 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="62b04c16-c119-4709-a694-441964ae360a" containerName="nova-api-log" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.319885 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.326315 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.326517 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.327306 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.335414 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.475857 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2404cde8-1d52-4931-9361-434e7de71954-logs\") pod \"nova-api-0\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") " pod="openstack/nova-api-0" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.476293 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") " pod="openstack/nova-api-0" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.476483 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"2404cde8-1d52-4931-9361-434e7de71954\") " pod="openstack/nova-api-0" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.476688 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") " pod="openstack/nova-api-0" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.476766 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4gvw\" (UniqueName: \"kubernetes.io/projected/2404cde8-1d52-4931-9361-434e7de71954-kube-api-access-x4gvw\") pod \"nova-api-0\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") " pod="openstack/nova-api-0" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.476856 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-config-data\") pod \"nova-api-0\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") " pod="openstack/nova-api-0" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.579435 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2404cde8-1d52-4931-9361-434e7de71954-logs\") pod \"nova-api-0\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") " pod="openstack/nova-api-0" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.580097 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") " pod="openstack/nova-api-0" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.580175 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-public-tls-certs\") pod \"nova-api-0\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") " pod="openstack/nova-api-0" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.580206 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") " pod="openstack/nova-api-0" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.580252 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4gvw\" (UniqueName: \"kubernetes.io/projected/2404cde8-1d52-4931-9361-434e7de71954-kube-api-access-x4gvw\") pod \"nova-api-0\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") " pod="openstack/nova-api-0" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.580309 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-config-data\") pod \"nova-api-0\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") " pod="openstack/nova-api-0" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.580790 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2404cde8-1d52-4931-9361-434e7de71954-logs\") pod \"nova-api-0\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") " pod="openstack/nova-api-0" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.585618 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-config-data\") pod \"nova-api-0\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") " pod="openstack/nova-api-0" Nov 29 
06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.586090 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") " pod="openstack/nova-api-0" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.586454 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-public-tls-certs\") pod \"nova-api-0\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") " pod="openstack/nova-api-0" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.601595 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4gvw\" (UniqueName: \"kubernetes.io/projected/2404cde8-1d52-4931-9361-434e7de71954-kube-api-access-x4gvw\") pod \"nova-api-0\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") " pod="openstack/nova-api-0" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.608361 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") " pod="openstack/nova-api-0" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.732309 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.924835 4947 generic.go:334] "Generic (PLEG): container finished" podID="222c733d-f870-4232-b9d9-d2a9c738927f" containerID="46e0be66f82a12264ad3c18bad009d8802e669b6c4f7bf497631fae941b3657c" exitCode=0 Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.925916 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"222c733d-f870-4232-b9d9-d2a9c738927f","Type":"ContainerDied","Data":"46e0be66f82a12264ad3c18bad009d8802e669b6c4f7bf497631fae941b3657c"} Nov 29 06:59:12 crc kubenswrapper[4947]: I1129 06:59:12.964079 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.194305 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62b04c16-c119-4709-a694-441964ae360a" path="/var/lib/kubelet/pods/62b04c16-c119-4709-a694-441964ae360a/volumes" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.236565 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-f6f5x"] Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.242262 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-f6f5x" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.250945 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.251318 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.332741 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-f6f5x"] Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.393698 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.408991 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsrb8\" (UniqueName: \"kubernetes.io/projected/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-kube-api-access-nsrb8\") pod \"nova-cell1-cell-mapping-f6f5x\" (UID: \"f88302c5-0ec1-4e34-848e-0f5deaaa3fea\") " pod="openstack/nova-cell1-cell-mapping-f6f5x" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.409073 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-scripts\") pod \"nova-cell1-cell-mapping-f6f5x\" (UID: \"f88302c5-0ec1-4e34-848e-0f5deaaa3fea\") " pod="openstack/nova-cell1-cell-mapping-f6f5x" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.409109 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-config-data\") pod \"nova-cell1-cell-mapping-f6f5x\" (UID: \"f88302c5-0ec1-4e34-848e-0f5deaaa3fea\") " pod="openstack/nova-cell1-cell-mapping-f6f5x" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 
06:59:13.410927 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-f6f5x\" (UID: \"f88302c5-0ec1-4e34-848e-0f5deaaa3fea\") " pod="openstack/nova-cell1-cell-mapping-f6f5x" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.512831 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-config-data\") pod \"nova-cell1-cell-mapping-f6f5x\" (UID: \"f88302c5-0ec1-4e34-848e-0f5deaaa3fea\") " pod="openstack/nova-cell1-cell-mapping-f6f5x" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.513058 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-f6f5x\" (UID: \"f88302c5-0ec1-4e34-848e-0f5deaaa3fea\") " pod="openstack/nova-cell1-cell-mapping-f6f5x" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.513162 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsrb8\" (UniqueName: \"kubernetes.io/projected/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-kube-api-access-nsrb8\") pod \"nova-cell1-cell-mapping-f6f5x\" (UID: \"f88302c5-0ec1-4e34-848e-0f5deaaa3fea\") " pod="openstack/nova-cell1-cell-mapping-f6f5x" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.513201 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-scripts\") pod \"nova-cell1-cell-mapping-f6f5x\" (UID: \"f88302c5-0ec1-4e34-848e-0f5deaaa3fea\") " pod="openstack/nova-cell1-cell-mapping-f6f5x" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.523525 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-f6f5x\" (UID: \"f88302c5-0ec1-4e34-848e-0f5deaaa3fea\") " pod="openstack/nova-cell1-cell-mapping-f6f5x" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.525208 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-config-data\") pod \"nova-cell1-cell-mapping-f6f5x\" (UID: \"f88302c5-0ec1-4e34-848e-0f5deaaa3fea\") " pod="openstack/nova-cell1-cell-mapping-f6f5x" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.526269 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-scripts\") pod \"nova-cell1-cell-mapping-f6f5x\" (UID: \"f88302c5-0ec1-4e34-848e-0f5deaaa3fea\") " pod="openstack/nova-cell1-cell-mapping-f6f5x" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.538308 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsrb8\" (UniqueName: \"kubernetes.io/projected/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-kube-api-access-nsrb8\") pod \"nova-cell1-cell-mapping-f6f5x\" (UID: \"f88302c5-0ec1-4e34-848e-0f5deaaa3fea\") " pod="openstack/nova-cell1-cell-mapping-f6f5x" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.597621 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-f6f5x" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.625634 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.719775 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-combined-ca-bundle\") pod \"222c733d-f870-4232-b9d9-d2a9c738927f\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.719878 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-scripts\") pod \"222c733d-f870-4232-b9d9-d2a9c738927f\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.720013 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-sg-core-conf-yaml\") pod \"222c733d-f870-4232-b9d9-d2a9c738927f\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.720073 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/222c733d-f870-4232-b9d9-d2a9c738927f-log-httpd\") pod \"222c733d-f870-4232-b9d9-d2a9c738927f\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.720122 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/222c733d-f870-4232-b9d9-d2a9c738927f-run-httpd\") pod \"222c733d-f870-4232-b9d9-d2a9c738927f\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.720178 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-ceilometer-tls-certs\") pod \"222c733d-f870-4232-b9d9-d2a9c738927f\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.720204 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z4mh\" (UniqueName: \"kubernetes.io/projected/222c733d-f870-4232-b9d9-d2a9c738927f-kube-api-access-7z4mh\") pod \"222c733d-f870-4232-b9d9-d2a9c738927f\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.720321 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-config-data\") pod \"222c733d-f870-4232-b9d9-d2a9c738927f\" (UID: \"222c733d-f870-4232-b9d9-d2a9c738927f\") " Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.722203 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/222c733d-f870-4232-b9d9-d2a9c738927f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "222c733d-f870-4232-b9d9-d2a9c738927f" (UID: "222c733d-f870-4232-b9d9-d2a9c738927f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.724720 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/222c733d-f870-4232-b9d9-d2a9c738927f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "222c733d-f870-4232-b9d9-d2a9c738927f" (UID: "222c733d-f870-4232-b9d9-d2a9c738927f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.728605 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-scripts" (OuterVolumeSpecName: "scripts") pod "222c733d-f870-4232-b9d9-d2a9c738927f" (UID: "222c733d-f870-4232-b9d9-d2a9c738927f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.737568 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/222c733d-f870-4232-b9d9-d2a9c738927f-kube-api-access-7z4mh" (OuterVolumeSpecName: "kube-api-access-7z4mh") pod "222c733d-f870-4232-b9d9-d2a9c738927f" (UID: "222c733d-f870-4232-b9d9-d2a9c738927f"). InnerVolumeSpecName "kube-api-access-7z4mh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.825979 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.826026 4947 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/222c733d-f870-4232-b9d9-d2a9c738927f-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.826037 4947 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/222c733d-f870-4232-b9d9-d2a9c738927f-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.826046 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z4mh\" (UniqueName: \"kubernetes.io/projected/222c733d-f870-4232-b9d9-d2a9c738927f-kube-api-access-7z4mh\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:13 
crc kubenswrapper[4947]: I1129 06:59:13.856594 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "222c733d-f870-4232-b9d9-d2a9c738927f" (UID: "222c733d-f870-4232-b9d9-d2a9c738927f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.900190 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "222c733d-f870-4232-b9d9-d2a9c738927f" (UID: "222c733d-f870-4232-b9d9-d2a9c738927f"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.927617 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "222c733d-f870-4232-b9d9-d2a9c738927f" (UID: "222c733d-f870-4232-b9d9-d2a9c738927f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.928878 4947 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.928914 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.928927 4947 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.951902 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"222c733d-f870-4232-b9d9-d2a9c738927f","Type":"ContainerDied","Data":"e549908914759ed275ca7c8e949e0659350022de8b3779c97844c28f320317d6"} Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.951978 4947 scope.go:117] "RemoveContainer" containerID="5757b49b55088a093bd1c1509a030821dc46b673412467c680cfd40b2f2ac60c" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.952074 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.960894 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2404cde8-1d52-4931-9361-434e7de71954","Type":"ContainerStarted","Data":"f7e61fea305057921b77e15282c15cc48935a75159f3fdc4ee995e01881439ed"} Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.960961 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2404cde8-1d52-4931-9361-434e7de71954","Type":"ContainerStarted","Data":"706d15caaf157231a6aba0715938b1ff5684b6d4bad5d5b2abdd789274f04841"} Nov 29 06:59:13 crc kubenswrapper[4947]: I1129 06:59:13.984947 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-config-data" (OuterVolumeSpecName: "config-data") pod "222c733d-f870-4232-b9d9-d2a9c738927f" (UID: "222c733d-f870-4232-b9d9-d2a9c738927f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.002611 4947 scope.go:117] "RemoveContainer" containerID="9ff2ae046700848d2b08ddcd201293d0059cb6918dec496bac3564bee1933bd9" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.031630 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/222c733d-f870-4232-b9d9-d2a9c738927f-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.055753 4947 scope.go:117] "RemoveContainer" containerID="46e0be66f82a12264ad3c18bad009d8802e669b6c4f7bf497631fae941b3657c" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.091051 4947 scope.go:117] "RemoveContainer" containerID="ab257a326615ba400b6aabdcefccf2c813646047394625533dff7162ce23e0c1" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.205695 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-f6f5x"] Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.296006 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.309775 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.333817 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:59:14 crc kubenswrapper[4947]: E1129 06:59:14.335740 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="222c733d-f870-4232-b9d9-d2a9c738927f" containerName="ceilometer-notification-agent" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.335789 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="222c733d-f870-4232-b9d9-d2a9c738927f" containerName="ceilometer-notification-agent" Nov 29 06:59:14 crc kubenswrapper[4947]: E1129 06:59:14.335865 4947 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="222c733d-f870-4232-b9d9-d2a9c738927f" containerName="sg-core" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.335880 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="222c733d-f870-4232-b9d9-d2a9c738927f" containerName="sg-core" Nov 29 06:59:14 crc kubenswrapper[4947]: E1129 06:59:14.335905 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="222c733d-f870-4232-b9d9-d2a9c738927f" containerName="ceilometer-central-agent" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.335918 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="222c733d-f870-4232-b9d9-d2a9c738927f" containerName="ceilometer-central-agent" Nov 29 06:59:14 crc kubenswrapper[4947]: E1129 06:59:14.335940 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="222c733d-f870-4232-b9d9-d2a9c738927f" containerName="proxy-httpd" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.335950 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="222c733d-f870-4232-b9d9-d2a9c738927f" containerName="proxy-httpd" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.336378 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="222c733d-f870-4232-b9d9-d2a9c738927f" containerName="proxy-httpd" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.336415 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="222c733d-f870-4232-b9d9-d2a9c738927f" containerName="ceilometer-central-agent" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.336449 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="222c733d-f870-4232-b9d9-d2a9c738927f" containerName="ceilometer-notification-agent" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.336461 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="222c733d-f870-4232-b9d9-d2a9c738927f" containerName="sg-core" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.338974 4947 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.342893 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.356338 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.358507 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.391561 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.440732 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-scripts\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.440813 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61d5ade8-7aff-4794-b125-8976230ac2c7-log-httpd\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.440885 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-config-data\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.441127 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.441494 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.441563 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrcdd\" (UniqueName: \"kubernetes.io/projected/61d5ade8-7aff-4794-b125-8976230ac2c7-kube-api-access-qrcdd\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.441775 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61d5ade8-7aff-4794-b125-8976230ac2c7-run-httpd\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.441858 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.544397 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61d5ade8-7aff-4794-b125-8976230ac2c7-run-httpd\") pod \"ceilometer-0\" (UID: 
\"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.544497 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.544545 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-scripts\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.544567 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61d5ade8-7aff-4794-b125-8976230ac2c7-log-httpd\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.544596 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-config-data\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.544641 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.544733 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.544769 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrcdd\" (UniqueName: \"kubernetes.io/projected/61d5ade8-7aff-4794-b125-8976230ac2c7-kube-api-access-qrcdd\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.546148 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61d5ade8-7aff-4794-b125-8976230ac2c7-log-httpd\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.546264 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61d5ade8-7aff-4794-b125-8976230ac2c7-run-httpd\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.552108 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.556727 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc 
kubenswrapper[4947]: I1129 06:59:14.558433 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.573322 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-scripts\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.581357 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-config-data\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.582661 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrcdd\" (UniqueName: \"kubernetes.io/projected/61d5ade8-7aff-4794-b125-8976230ac2c7-kube-api-access-qrcdd\") pod \"ceilometer-0\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.775854 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 06:59:14 crc kubenswrapper[4947]: I1129 06:59:14.992499 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2404cde8-1d52-4931-9361-434e7de71954","Type":"ContainerStarted","Data":"b11a1bac308817c416b51baae92a04ac7cfd167554827b16476b89c6d45fbff7"} Nov 29 06:59:15 crc kubenswrapper[4947]: I1129 06:59:15.001332 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-f6f5x" event={"ID":"f88302c5-0ec1-4e34-848e-0f5deaaa3fea","Type":"ContainerStarted","Data":"26ebe7134e34e48a78b93ca743611ac2b6b566a39deb26b3d9388c0dc4cd659d"} Nov 29 06:59:15 crc kubenswrapper[4947]: I1129 06:59:15.001385 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-f6f5x" event={"ID":"f88302c5-0ec1-4e34-848e-0f5deaaa3fea","Type":"ContainerStarted","Data":"b785757a5d6da6c01732e0bc641ee94d59ad5d3fb708ed4335d70e6f7b6d7c88"} Nov 29 06:59:15 crc kubenswrapper[4947]: I1129 06:59:15.028020 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.027993792 podStartE2EDuration="3.027993792s" podCreationTimestamp="2025-11-29 06:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:59:15.019499458 +0000 UTC m=+1506.063881539" watchObservedRunningTime="2025-11-29 06:59:15.027993792 +0000 UTC m=+1506.072375863" Nov 29 06:59:15 crc kubenswrapper[4947]: I1129 06:59:15.129799 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" Nov 29 06:59:15 crc kubenswrapper[4947]: I1129 06:59:15.167989 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-f6f5x" podStartSLOduration=2.167964151 podStartE2EDuration="2.167964151s" podCreationTimestamp="2025-11-29 
06:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:59:15.051558344 +0000 UTC m=+1506.095940425" watchObservedRunningTime="2025-11-29 06:59:15.167964151 +0000 UTC m=+1506.212346242" Nov 29 06:59:15 crc kubenswrapper[4947]: I1129 06:59:15.221056 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="222c733d-f870-4232-b9d9-d2a9c738927f" path="/var/lib/kubelet/pods/222c733d-f870-4232-b9d9-d2a9c738927f/volumes" Nov 29 06:59:15 crc kubenswrapper[4947]: I1129 06:59:15.241023 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-2kftb"] Nov 29 06:59:15 crc kubenswrapper[4947]: I1129 06:59:15.241423 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-566b5b7845-2kftb" podUID="bdbb0f6f-844f-420f-b832-87f194453bce" containerName="dnsmasq-dns" containerID="cri-o://b4f31d5dd03ce54fcb1f43df5ba0bc27a290fa319c3dfc294564a7bf76c0c71a" gracePeriod=10 Nov 29 06:59:15 crc kubenswrapper[4947]: I1129 06:59:15.461894 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 06:59:16 crc kubenswrapper[4947]: I1129 06:59:16.015886 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61d5ade8-7aff-4794-b125-8976230ac2c7","Type":"ContainerStarted","Data":"1781f03763779110c7515fc0152e9e6d6ce0ff1b34effcd6bfd2b0751c79d0d8"} Nov 29 06:59:16 crc kubenswrapper[4947]: I1129 06:59:16.021193 4947 generic.go:334] "Generic (PLEG): container finished" podID="bdbb0f6f-844f-420f-b832-87f194453bce" containerID="b4f31d5dd03ce54fcb1f43df5ba0bc27a290fa319c3dfc294564a7bf76c0c71a" exitCode=0 Nov 29 06:59:16 crc kubenswrapper[4947]: I1129 06:59:16.021370 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-2kftb" 
event={"ID":"bdbb0f6f-844f-420f-b832-87f194453bce","Type":"ContainerDied","Data":"b4f31d5dd03ce54fcb1f43df5ba0bc27a290fa319c3dfc294564a7bf76c0c71a"} Nov 29 06:59:16 crc kubenswrapper[4947]: I1129 06:59:16.607288 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-2kftb" Nov 29 06:59:16 crc kubenswrapper[4947]: I1129 06:59:16.644274 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-ovsdbserver-sb\") pod \"bdbb0f6f-844f-420f-b832-87f194453bce\" (UID: \"bdbb0f6f-844f-420f-b832-87f194453bce\") " Nov 29 06:59:16 crc kubenswrapper[4947]: I1129 06:59:16.644406 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwvgd\" (UniqueName: \"kubernetes.io/projected/bdbb0f6f-844f-420f-b832-87f194453bce-kube-api-access-jwvgd\") pod \"bdbb0f6f-844f-420f-b832-87f194453bce\" (UID: \"bdbb0f6f-844f-420f-b832-87f194453bce\") " Nov 29 06:59:16 crc kubenswrapper[4947]: I1129 06:59:16.644479 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-config\") pod \"bdbb0f6f-844f-420f-b832-87f194453bce\" (UID: \"bdbb0f6f-844f-420f-b832-87f194453bce\") " Nov 29 06:59:16 crc kubenswrapper[4947]: I1129 06:59:16.644543 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-ovsdbserver-nb\") pod \"bdbb0f6f-844f-420f-b832-87f194453bce\" (UID: \"bdbb0f6f-844f-420f-b832-87f194453bce\") " Nov 29 06:59:16 crc kubenswrapper[4947]: I1129 06:59:16.644599 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-dns-svc\") 
pod \"bdbb0f6f-844f-420f-b832-87f194453bce\" (UID: \"bdbb0f6f-844f-420f-b832-87f194453bce\") " Nov 29 06:59:16 crc kubenswrapper[4947]: I1129 06:59:16.657574 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdbb0f6f-844f-420f-b832-87f194453bce-kube-api-access-jwvgd" (OuterVolumeSpecName: "kube-api-access-jwvgd") pod "bdbb0f6f-844f-420f-b832-87f194453bce" (UID: "bdbb0f6f-844f-420f-b832-87f194453bce"). InnerVolumeSpecName "kube-api-access-jwvgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:59:16 crc kubenswrapper[4947]: I1129 06:59:16.748354 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwvgd\" (UniqueName: \"kubernetes.io/projected/bdbb0f6f-844f-420f-b832-87f194453bce-kube-api-access-jwvgd\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:16 crc kubenswrapper[4947]: I1129 06:59:16.765441 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bdbb0f6f-844f-420f-b832-87f194453bce" (UID: "bdbb0f6f-844f-420f-b832-87f194453bce"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 06:59:16 crc kubenswrapper[4947]: I1129 06:59:16.781555 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bdbb0f6f-844f-420f-b832-87f194453bce" (UID: "bdbb0f6f-844f-420f-b832-87f194453bce"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 06:59:16 crc kubenswrapper[4947]: I1129 06:59:16.793978 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-config" (OuterVolumeSpecName: "config") pod "bdbb0f6f-844f-420f-b832-87f194453bce" (UID: "bdbb0f6f-844f-420f-b832-87f194453bce"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 06:59:16 crc kubenswrapper[4947]: I1129 06:59:16.798729 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bdbb0f6f-844f-420f-b832-87f194453bce" (UID: "bdbb0f6f-844f-420f-b832-87f194453bce"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 06:59:16 crc kubenswrapper[4947]: I1129 06:59:16.850778 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 29 06:59:16 crc kubenswrapper[4947]: I1129 06:59:16.850841 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-config\") on node \"crc\" DevicePath \"\""
Nov 29 06:59:16 crc kubenswrapper[4947]: I1129 06:59:16.850855 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 29 06:59:16 crc kubenswrapper[4947]: I1129 06:59:16.850866 4947 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdbb0f6f-844f-420f-b832-87f194453bce-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 29 06:59:17 crc kubenswrapper[4947]: I1129 06:59:17.038433 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-2kftb" event={"ID":"bdbb0f6f-844f-420f-b832-87f194453bce","Type":"ContainerDied","Data":"4ba821c1db8a3d272e8faf0ab2faf7be875d89b205b6b4901cd7d63f39abeb9a"}
Nov 29 06:59:17 crc kubenswrapper[4947]: I1129 06:59:17.038963 4947 scope.go:117] "RemoveContainer" containerID="b4f31d5dd03ce54fcb1f43df5ba0bc27a290fa319c3dfc294564a7bf76c0c71a"
Nov 29 06:59:17 crc kubenswrapper[4947]: I1129 06:59:17.038529 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-2kftb"
Nov 29 06:59:17 crc kubenswrapper[4947]: I1129 06:59:17.080637 4947 scope.go:117] "RemoveContainer" containerID="caa9c510c9b79f3f4683e017d41493f057cbf1b0f029448aa427d2e5d9f9366d"
Nov 29 06:59:17 crc kubenswrapper[4947]: I1129 06:59:17.090325 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-2kftb"]
Nov 29 06:59:17 crc kubenswrapper[4947]: I1129 06:59:17.100261 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-2kftb"]
Nov 29 06:59:17 crc kubenswrapper[4947]: I1129 06:59:17.191126 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdbb0f6f-844f-420f-b832-87f194453bce" path="/var/lib/kubelet/pods/bdbb0f6f-844f-420f-b832-87f194453bce/volumes"
Nov 29 06:59:18 crc kubenswrapper[4947]: I1129 06:59:18.056499 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61d5ade8-7aff-4794-b125-8976230ac2c7","Type":"ContainerStarted","Data":"16478996e26d9b8193ea7c9e93cb57fb8c498423707add8febb80121b9cc9b92"}
Nov 29 06:59:19 crc kubenswrapper[4947]: I1129 06:59:19.070679 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61d5ade8-7aff-4794-b125-8976230ac2c7","Type":"ContainerStarted","Data":"f371057812d11d5d65ef8ad96bacb494338a8f6411b5e99349323ac4db876d5a"}
Nov 29 06:59:20 crc kubenswrapper[4947]: I1129 06:59:20.086452 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61d5ade8-7aff-4794-b125-8976230ac2c7","Type":"ContainerStarted","Data":"cf0ac48485450ed668311bf28161e20e6ff07face7d484829813d36539c40be3"}
Nov 29 06:59:21 crc kubenswrapper[4947]: I1129 06:59:21.120809 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61d5ade8-7aff-4794-b125-8976230ac2c7","Type":"ContainerStarted","Data":"68808fd967124b482b1cb4b10a5cb67d25d6315894f66100a60932207750eff7"}
Nov 29 06:59:21 crc kubenswrapper[4947]: I1129 06:59:21.121625 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 29 06:59:21 crc kubenswrapper[4947]: I1129 06:59:21.155577 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.092513211 podStartE2EDuration="7.155540498s" podCreationTimestamp="2025-11-29 06:59:14 +0000 UTC" firstStartedPulling="2025-11-29 06:59:15.51769388 +0000 UTC m=+1506.562075971" lastFinishedPulling="2025-11-29 06:59:20.580721177 +0000 UTC m=+1511.625103258" observedRunningTime="2025-11-29 06:59:21.152182153 +0000 UTC m=+1512.196564234" watchObservedRunningTime="2025-11-29 06:59:21.155540498 +0000 UTC m=+1512.199922579"
Nov 29 06:59:22 crc kubenswrapper[4947]: I1129 06:59:22.733485 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 29 06:59:22 crc kubenswrapper[4947]: I1129 06:59:22.733946 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 29 06:59:23 crc kubenswrapper[4947]: I1129 06:59:23.146194 4947 generic.go:334] "Generic (PLEG): container finished" podID="f88302c5-0ec1-4e34-848e-0f5deaaa3fea" containerID="26ebe7134e34e48a78b93ca743611ac2b6b566a39deb26b3d9388c0dc4cd659d" exitCode=0
Nov 29 06:59:23 crc kubenswrapper[4947]: I1129 06:59:23.146273 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-f6f5x" event={"ID":"f88302c5-0ec1-4e34-848e-0f5deaaa3fea","Type":"ContainerDied","Data":"26ebe7134e34e48a78b93ca743611ac2b6b566a39deb26b3d9388c0dc4cd659d"}
Nov 29 06:59:23 crc kubenswrapper[4947]: I1129 06:59:23.750473 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2404cde8-1d52-4931-9361-434e7de71954" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.183:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 29 06:59:23 crc kubenswrapper[4947]: I1129 06:59:23.750551 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2404cde8-1d52-4931-9361-434e7de71954" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.183:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 29 06:59:24 crc kubenswrapper[4947]: I1129 06:59:24.604011 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-f6f5x"
Nov 29 06:59:24 crc kubenswrapper[4947]: I1129 06:59:24.658299 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsrb8\" (UniqueName: \"kubernetes.io/projected/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-kube-api-access-nsrb8\") pod \"f88302c5-0ec1-4e34-848e-0f5deaaa3fea\" (UID: \"f88302c5-0ec1-4e34-848e-0f5deaaa3fea\") "
Nov 29 06:59:24 crc kubenswrapper[4947]: I1129 06:59:24.658500 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-scripts\") pod \"f88302c5-0ec1-4e34-848e-0f5deaaa3fea\" (UID: \"f88302c5-0ec1-4e34-848e-0f5deaaa3fea\") "
Nov 29 06:59:24 crc kubenswrapper[4947]: I1129 06:59:24.658745 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-combined-ca-bundle\") pod \"f88302c5-0ec1-4e34-848e-0f5deaaa3fea\" (UID: \"f88302c5-0ec1-4e34-848e-0f5deaaa3fea\") "
Nov 29 06:59:24 crc kubenswrapper[4947]: I1129 06:59:24.658803 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-config-data\") pod \"f88302c5-0ec1-4e34-848e-0f5deaaa3fea\" (UID: \"f88302c5-0ec1-4e34-848e-0f5deaaa3fea\") "
Nov 29 06:59:24 crc kubenswrapper[4947]: I1129 06:59:24.683616 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-kube-api-access-nsrb8" (OuterVolumeSpecName: "kube-api-access-nsrb8") pod "f88302c5-0ec1-4e34-848e-0f5deaaa3fea" (UID: "f88302c5-0ec1-4e34-848e-0f5deaaa3fea"). InnerVolumeSpecName "kube-api-access-nsrb8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 06:59:24 crc kubenswrapper[4947]: I1129 06:59:24.686073 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-scripts" (OuterVolumeSpecName: "scripts") pod "f88302c5-0ec1-4e34-848e-0f5deaaa3fea" (UID: "f88302c5-0ec1-4e34-848e-0f5deaaa3fea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 06:59:24 crc kubenswrapper[4947]: I1129 06:59:24.697133 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f88302c5-0ec1-4e34-848e-0f5deaaa3fea" (UID: "f88302c5-0ec1-4e34-848e-0f5deaaa3fea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 06:59:24 crc kubenswrapper[4947]: I1129 06:59:24.709640 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-config-data" (OuterVolumeSpecName: "config-data") pod "f88302c5-0ec1-4e34-848e-0f5deaaa3fea" (UID: "f88302c5-0ec1-4e34-848e-0f5deaaa3fea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 06:59:24 crc kubenswrapper[4947]: I1129 06:59:24.761325 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsrb8\" (UniqueName: \"kubernetes.io/projected/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-kube-api-access-nsrb8\") on node \"crc\" DevicePath \"\""
Nov 29 06:59:24 crc kubenswrapper[4947]: I1129 06:59:24.761390 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-scripts\") on node \"crc\" DevicePath \"\""
Nov 29 06:59:24 crc kubenswrapper[4947]: I1129 06:59:24.761403 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 06:59:24 crc kubenswrapper[4947]: I1129 06:59:24.761415 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f88302c5-0ec1-4e34-848e-0f5deaaa3fea-config-data\") on node \"crc\" DevicePath \"\""
Nov 29 06:59:25 crc kubenswrapper[4947]: I1129 06:59:25.172468 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-f6f5x" event={"ID":"f88302c5-0ec1-4e34-848e-0f5deaaa3fea","Type":"ContainerDied","Data":"b785757a5d6da6c01732e0bc641ee94d59ad5d3fb708ed4335d70e6f7b6d7c88"}
Nov 29 06:59:25 crc kubenswrapper[4947]: I1129 06:59:25.172578 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b785757a5d6da6c01732e0bc641ee94d59ad5d3fb708ed4335d70e6f7b6d7c88"
Nov 29 06:59:25 crc kubenswrapper[4947]: I1129 06:59:25.172669 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-f6f5x"
Nov 29 06:59:25 crc kubenswrapper[4947]: E1129 06:59:25.390103 4947 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf88302c5_0ec1_4e34_848e_0f5deaaa3fea.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf88302c5_0ec1_4e34_848e_0f5deaaa3fea.slice/crio-b785757a5d6da6c01732e0bc641ee94d59ad5d3fb708ed4335d70e6f7b6d7c88\": RecentStats: unable to find data in memory cache]"
Nov 29 06:59:25 crc kubenswrapper[4947]: I1129 06:59:25.419390 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 29 06:59:25 crc kubenswrapper[4947]: I1129 06:59:25.419719 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2404cde8-1d52-4931-9361-434e7de71954" containerName="nova-api-log" containerID="cri-o://f7e61fea305057921b77e15282c15cc48935a75159f3fdc4ee995e01881439ed" gracePeriod=30
Nov 29 06:59:25 crc kubenswrapper[4947]: I1129 06:59:25.419911 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2404cde8-1d52-4931-9361-434e7de71954" containerName="nova-api-api" containerID="cri-o://b11a1bac308817c416b51baae92a04ac7cfd167554827b16476b89c6d45fbff7" gracePeriod=30
Nov 29 06:59:25 crc kubenswrapper[4947]: I1129 06:59:25.433756 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 29 06:59:25 crc kubenswrapper[4947]: I1129 06:59:25.434002 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="eb7e3676-7ede-4882-984c-4f2e68c73420" containerName="nova-scheduler-scheduler" containerID="cri-o://f00a8fe203a625eaac6622ebc6f9c8c85707ed2a66ebf16e16ab703139d741e7" gracePeriod=30
Nov 29 06:59:25 crc kubenswrapper[4947]: I1129 06:59:25.487082 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 29 06:59:25 crc kubenswrapper[4947]: I1129 06:59:25.487404 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7845fd7b-f71a-4974-ae50-17ce9451207f" containerName="nova-metadata-log" containerID="cri-o://0156d25edea207b6a6e347008a9998faa722816370e6a383eb5b8b311e8b51b6" gracePeriod=30
Nov 29 06:59:25 crc kubenswrapper[4947]: I1129 06:59:25.487586 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7845fd7b-f71a-4974-ae50-17ce9451207f" containerName="nova-metadata-metadata" containerID="cri-o://48c0f8eea5f305cda56e8cb0a04ff0c65cd0e03f7d23246573a49adfd3e8a6fc" gracePeriod=30
Nov 29 06:59:26 crc kubenswrapper[4947]: I1129 06:59:26.208736 4947 generic.go:334] "Generic (PLEG): container finished" podID="7845fd7b-f71a-4974-ae50-17ce9451207f" containerID="0156d25edea207b6a6e347008a9998faa722816370e6a383eb5b8b311e8b51b6" exitCode=143
Nov 29 06:59:26 crc kubenswrapper[4947]: I1129 06:59:26.208973 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7845fd7b-f71a-4974-ae50-17ce9451207f","Type":"ContainerDied","Data":"0156d25edea207b6a6e347008a9998faa722816370e6a383eb5b8b311e8b51b6"}
Nov 29 06:59:26 crc kubenswrapper[4947]: I1129 06:59:26.217196 4947 generic.go:334] "Generic (PLEG): container finished" podID="2404cde8-1d52-4931-9361-434e7de71954" containerID="f7e61fea305057921b77e15282c15cc48935a75159f3fdc4ee995e01881439ed" exitCode=143
Nov 29 06:59:26 crc kubenswrapper[4947]: I1129 06:59:26.217277 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2404cde8-1d52-4931-9361-434e7de71954","Type":"ContainerDied","Data":"f7e61fea305057921b77e15282c15cc48935a75159f3fdc4ee995e01881439ed"}
Nov 29 crc kubenswrapper[4947]: I1129 06:59:29.254643 4947 generic.go:334] "Generic (PLEG): container finished" podID="7845fd7b-f71a-4974-ae50-17ce9451207f" containerID="48c0f8eea5f305cda56e8cb0a04ff0c65cd0e03f7d23246573a49adfd3e8a6fc" exitCode=0
Nov 29 06:59:29 crc kubenswrapper[4947]: I1129 06:59:29.254726 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7845fd7b-f71a-4974-ae50-17ce9451207f","Type":"ContainerDied","Data":"48c0f8eea5f305cda56e8cb0a04ff0c65cd0e03f7d23246573a49adfd3e8a6fc"}
Nov 29 06:59:29 crc kubenswrapper[4947]: E1129 06:59:29.561540 4947 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f00a8fe203a625eaac6622ebc6f9c8c85707ed2a66ebf16e16ab703139d741e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 29 06:59:29 crc kubenswrapper[4947]: E1129 06:59:29.565027 4947 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f00a8fe203a625eaac6622ebc6f9c8c85707ed2a66ebf16e16ab703139d741e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 29 06:59:29 crc kubenswrapper[4947]: E1129 06:59:29.566588 4947 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f00a8fe203a625eaac6622ebc6f9c8c85707ed2a66ebf16e16ab703139d741e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 29 06:59:29 crc kubenswrapper[4947]: E1129 06:59:29.566649 4947 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="eb7e3676-7ede-4882-984c-4f2e68c73420" containerName="nova-scheduler-scheduler"
Nov 29 06:59:29 crc kubenswrapper[4947]: I1129 06:59:29.654936 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 29 06:59:29 crc kubenswrapper[4947]: I1129 06:59:29.670451 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7845fd7b-f71a-4974-ae50-17ce9451207f-nova-metadata-tls-certs\") pod \"7845fd7b-f71a-4974-ae50-17ce9451207f\" (UID: \"7845fd7b-f71a-4974-ae50-17ce9451207f\") "
Nov 29 06:59:29 crc kubenswrapper[4947]: I1129 06:59:29.670608 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7845fd7b-f71a-4974-ae50-17ce9451207f-logs\") pod \"7845fd7b-f71a-4974-ae50-17ce9451207f\" (UID: \"7845fd7b-f71a-4974-ae50-17ce9451207f\") "
Nov 29 06:59:29 crc kubenswrapper[4947]: I1129 06:59:29.670709 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7845fd7b-f71a-4974-ae50-17ce9451207f-combined-ca-bundle\") pod \"7845fd7b-f71a-4974-ae50-17ce9451207f\" (UID: \"7845fd7b-f71a-4974-ae50-17ce9451207f\") "
Nov 29 06:59:29 crc kubenswrapper[4947]: I1129 06:59:29.671014 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7845fd7b-f71a-4974-ae50-17ce9451207f-config-data\") pod \"7845fd7b-f71a-4974-ae50-17ce9451207f\" (UID: \"7845fd7b-f71a-4974-ae50-17ce9451207f\") "
Nov 29 06:59:29 crc kubenswrapper[4947]: I1129 06:59:29.671068 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tcxp\" (UniqueName: \"kubernetes.io/projected/7845fd7b-f71a-4974-ae50-17ce9451207f-kube-api-access-8tcxp\") pod \"7845fd7b-f71a-4974-ae50-17ce9451207f\" (UID: \"7845fd7b-f71a-4974-ae50-17ce9451207f\") "
Nov 29 06:59:29 crc kubenswrapper[4947]: I1129 06:59:29.671251 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7845fd7b-f71a-4974-ae50-17ce9451207f-logs" (OuterVolumeSpecName: "logs") pod "7845fd7b-f71a-4974-ae50-17ce9451207f" (UID: "7845fd7b-f71a-4974-ae50-17ce9451207f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 06:59:29 crc kubenswrapper[4947]: I1129 06:59:29.671904 4947 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7845fd7b-f71a-4974-ae50-17ce9451207f-logs\") on node \"crc\" DevicePath \"\""
Nov 29 06:59:29 crc kubenswrapper[4947]: I1129 06:59:29.680776 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7845fd7b-f71a-4974-ae50-17ce9451207f-kube-api-access-8tcxp" (OuterVolumeSpecName: "kube-api-access-8tcxp") pod "7845fd7b-f71a-4974-ae50-17ce9451207f" (UID: "7845fd7b-f71a-4974-ae50-17ce9451207f"). InnerVolumeSpecName "kube-api-access-8tcxp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 06:59:29 crc kubenswrapper[4947]: I1129 06:59:29.760896 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7845fd7b-f71a-4974-ae50-17ce9451207f-config-data" (OuterVolumeSpecName: "config-data") pod "7845fd7b-f71a-4974-ae50-17ce9451207f" (UID: "7845fd7b-f71a-4974-ae50-17ce9451207f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 06:59:29 crc kubenswrapper[4947]: I1129 06:59:29.774981 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7845fd7b-f71a-4974-ae50-17ce9451207f-config-data\") on node \"crc\" DevicePath \"\""
Nov 29 06:59:29 crc kubenswrapper[4947]: I1129 06:59:29.775023 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tcxp\" (UniqueName: \"kubernetes.io/projected/7845fd7b-f71a-4974-ae50-17ce9451207f-kube-api-access-8tcxp\") on node \"crc\" DevicePath \"\""
Nov 29 06:59:29 crc kubenswrapper[4947]: I1129 06:59:29.777695 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7845fd7b-f71a-4974-ae50-17ce9451207f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7845fd7b-f71a-4974-ae50-17ce9451207f" (UID: "7845fd7b-f71a-4974-ae50-17ce9451207f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 06:59:29 crc kubenswrapper[4947]: I1129 06:59:29.791943 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7845fd7b-f71a-4974-ae50-17ce9451207f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7845fd7b-f71a-4974-ae50-17ce9451207f" (UID: "7845fd7b-f71a-4974-ae50-17ce9451207f"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 06:59:29 crc kubenswrapper[4947]: I1129 06:59:29.876973 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7845fd7b-f71a-4974-ae50-17ce9451207f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 06:59:29 crc kubenswrapper[4947]: I1129 06:59:29.877015 4947 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7845fd7b-f71a-4974-ae50-17ce9451207f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.274057 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7845fd7b-f71a-4974-ae50-17ce9451207f","Type":"ContainerDied","Data":"7304c05d88c1961ea09b914d4a6f84e2ab7e95c63fc008f6685333f6bed0dd5a"}
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.274155 4947 scope.go:117] "RemoveContainer" containerID="48c0f8eea5f305cda56e8cb0a04ff0c65cd0e03f7d23246573a49adfd3e8a6fc"
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.275840 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.279746 4947 generic.go:334] "Generic (PLEG): container finished" podID="2404cde8-1d52-4931-9361-434e7de71954" containerID="b11a1bac308817c416b51baae92a04ac7cfd167554827b16476b89c6d45fbff7" exitCode=0
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.279804 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2404cde8-1d52-4931-9361-434e7de71954","Type":"ContainerDied","Data":"b11a1bac308817c416b51baae92a04ac7cfd167554827b16476b89c6d45fbff7"}
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.279841 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2404cde8-1d52-4931-9361-434e7de71954","Type":"ContainerDied","Data":"706d15caaf157231a6aba0715938b1ff5684b6d4bad5d5b2abdd789274f04841"}
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.279859 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="706d15caaf157231a6aba0715938b1ff5684b6d4bad5d5b2abdd789274f04841"
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.318501 4947 scope.go:117] "RemoveContainer" containerID="0156d25edea207b6a6e347008a9998faa722816370e6a383eb5b8b311e8b51b6"
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.320111 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.340700 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.362839 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.394781 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 29 06:59:30 crc kubenswrapper[4947]: E1129 06:59:30.395322 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f88302c5-0ec1-4e34-848e-0f5deaaa3fea" containerName="nova-manage"
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.395347 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f88302c5-0ec1-4e34-848e-0f5deaaa3fea" containerName="nova-manage"
Nov 29 06:59:30 crc kubenswrapper[4947]: E1129 06:59:30.395366 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7845fd7b-f71a-4974-ae50-17ce9451207f" containerName="nova-metadata-metadata"
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.395378 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7845fd7b-f71a-4974-ae50-17ce9451207f" containerName="nova-metadata-metadata"
Nov 29 06:59:30 crc kubenswrapper[4947]: E1129 06:59:30.395404 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdbb0f6f-844f-420f-b832-87f194453bce" containerName="dnsmasq-dns"
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.395411 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdbb0f6f-844f-420f-b832-87f194453bce" containerName="dnsmasq-dns"
Nov 29 06:59:30 crc kubenswrapper[4947]: E1129 06:59:30.395423 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdbb0f6f-844f-420f-b832-87f194453bce" containerName="init"
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.395429 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdbb0f6f-844f-420f-b832-87f194453bce" containerName="init"
Nov 29 06:59:30 crc kubenswrapper[4947]: E1129 06:59:30.395446 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2404cde8-1d52-4931-9361-434e7de71954" containerName="nova-api-api"
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.395454 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="2404cde8-1d52-4931-9361-434e7de71954" containerName="nova-api-api"
Nov 29 06:59:30 crc kubenswrapper[4947]: E1129 06:59:30.395466 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7845fd7b-f71a-4974-ae50-17ce9451207f" containerName="nova-metadata-log"
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.395478 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7845fd7b-f71a-4974-ae50-17ce9451207f" containerName="nova-metadata-log"
Nov 29 06:59:30 crc kubenswrapper[4947]: E1129 06:59:30.395501 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2404cde8-1d52-4931-9361-434e7de71954" containerName="nova-api-log"
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.395509 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="2404cde8-1d52-4931-9361-434e7de71954" containerName="nova-api-log"
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.395794 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="7845fd7b-f71a-4974-ae50-17ce9451207f" containerName="nova-metadata-metadata"
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.395822 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdbb0f6f-844f-420f-b832-87f194453bce" containerName="dnsmasq-dns"
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.395839 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f88302c5-0ec1-4e34-848e-0f5deaaa3fea" containerName="nova-manage"
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.395852 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="2404cde8-1d52-4931-9361-434e7de71954" containerName="nova-api-api"
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.395862 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="2404cde8-1d52-4931-9361-434e7de71954" containerName="nova-api-log"
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.395874 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="7845fd7b-f71a-4974-ae50-17ce9451207f" containerName="nova-metadata-log"
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.396787 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4gvw\" (UniqueName: \"kubernetes.io/projected/2404cde8-1d52-4931-9361-434e7de71954-kube-api-access-x4gvw\") pod \"2404cde8-1d52-4931-9361-434e7de71954\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") "
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.397010 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-public-tls-certs\") pod \"2404cde8-1d52-4931-9361-434e7de71954\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") "
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.397167 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-combined-ca-bundle\") pod \"2404cde8-1d52-4931-9361-434e7de71954\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") "
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.397315 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2404cde8-1d52-4931-9361-434e7de71954-logs\") pod \"2404cde8-1d52-4931-9361-434e7de71954\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") "
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.397725 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-config-data\") pod \"2404cde8-1d52-4931-9361-434e7de71954\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") "
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.397835 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.397841 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-internal-tls-certs\") pod \"2404cde8-1d52-4931-9361-434e7de71954\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") "
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.399854 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2404cde8-1d52-4931-9361-434e7de71954-logs" (OuterVolumeSpecName: "logs") pod "2404cde8-1d52-4931-9361-434e7de71954" (UID: "2404cde8-1d52-4931-9361-434e7de71954"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.400198 4947 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2404cde8-1d52-4931-9361-434e7de71954-logs\") on node \"crc\" DevicePath \"\""
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.410059 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.410406 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.411533 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2404cde8-1d52-4931-9361-434e7de71954-kube-api-access-x4gvw" (OuterVolumeSpecName: "kube-api-access-x4gvw") pod "2404cde8-1d52-4931-9361-434e7de71954" (UID: "2404cde8-1d52-4931-9361-434e7de71954"). InnerVolumeSpecName "kube-api-access-x4gvw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.429759 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.440488 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2404cde8-1d52-4931-9361-434e7de71954" (UID: "2404cde8-1d52-4931-9361-434e7de71954"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.456369 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-config-data" (OuterVolumeSpecName: "config-data") pod "2404cde8-1d52-4931-9361-434e7de71954" (UID: "2404cde8-1d52-4931-9361-434e7de71954"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.493426 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2404cde8-1d52-4931-9361-434e7de71954" (UID: "2404cde8-1d52-4931-9361-434e7de71954"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.501579 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2404cde8-1d52-4931-9361-434e7de71954" (UID: "2404cde8-1d52-4931-9361-434e7de71954"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.502337 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-internal-tls-certs\") pod \"2404cde8-1d52-4931-9361-434e7de71954\" (UID: \"2404cde8-1d52-4931-9361-434e7de71954\") "
Nov 29 06:59:30 crc kubenswrapper[4947]: W1129 06:59:30.502517 4947 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/2404cde8-1d52-4931-9361-434e7de71954/volumes/kubernetes.io~secret/internal-tls-certs
Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.502542 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2404cde8-1d52-4931-9361-434e7de71954" (UID: "2404cde8-1d52-4931-9361-434e7de71954"). InnerVolumeSpecName "internal-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.502960 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5pn2\" (UniqueName: \"kubernetes.io/projected/3e8cb9c8-c54d-4269-b59b-6e865d503815-kube-api-access-n5pn2\") pod \"nova-metadata-0\" (UID: \"3e8cb9c8-c54d-4269-b59b-6e865d503815\") " pod="openstack/nova-metadata-0" Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.503025 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e8cb9c8-c54d-4269-b59b-6e865d503815-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3e8cb9c8-c54d-4269-b59b-6e865d503815\") " pod="openstack/nova-metadata-0" Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.503357 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e8cb9c8-c54d-4269-b59b-6e865d503815-config-data\") pod \"nova-metadata-0\" (UID: \"3e8cb9c8-c54d-4269-b59b-6e865d503815\") " pod="openstack/nova-metadata-0" Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.503431 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e8cb9c8-c54d-4269-b59b-6e865d503815-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3e8cb9c8-c54d-4269-b59b-6e865d503815\") " pod="openstack/nova-metadata-0" Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.503720 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e8cb9c8-c54d-4269-b59b-6e865d503815-logs\") pod \"nova-metadata-0\" (UID: \"3e8cb9c8-c54d-4269-b59b-6e865d503815\") " pod="openstack/nova-metadata-0" Nov 29 06:59:30 crc 
kubenswrapper[4947]: I1129 06:59:30.503898 4947 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.503917 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.503929 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.503940 4947 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2404cde8-1d52-4931-9361-434e7de71954-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.503952 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4gvw\" (UniqueName: \"kubernetes.io/projected/2404cde8-1d52-4931-9361-434e7de71954-kube-api-access-x4gvw\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.605803 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e8cb9c8-c54d-4269-b59b-6e865d503815-logs\") pod \"nova-metadata-0\" (UID: \"3e8cb9c8-c54d-4269-b59b-6e865d503815\") " pod="openstack/nova-metadata-0" Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.605941 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5pn2\" (UniqueName: \"kubernetes.io/projected/3e8cb9c8-c54d-4269-b59b-6e865d503815-kube-api-access-n5pn2\") pod \"nova-metadata-0\" (UID: 
\"3e8cb9c8-c54d-4269-b59b-6e865d503815\") " pod="openstack/nova-metadata-0" Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.605980 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e8cb9c8-c54d-4269-b59b-6e865d503815-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3e8cb9c8-c54d-4269-b59b-6e865d503815\") " pod="openstack/nova-metadata-0" Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.606018 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e8cb9c8-c54d-4269-b59b-6e865d503815-config-data\") pod \"nova-metadata-0\" (UID: \"3e8cb9c8-c54d-4269-b59b-6e865d503815\") " pod="openstack/nova-metadata-0" Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.606039 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e8cb9c8-c54d-4269-b59b-6e865d503815-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3e8cb9c8-c54d-4269-b59b-6e865d503815\") " pod="openstack/nova-metadata-0" Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.607374 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e8cb9c8-c54d-4269-b59b-6e865d503815-logs\") pod \"nova-metadata-0\" (UID: \"3e8cb9c8-c54d-4269-b59b-6e865d503815\") " pod="openstack/nova-metadata-0" Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.609974 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e8cb9c8-c54d-4269-b59b-6e865d503815-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3e8cb9c8-c54d-4269-b59b-6e865d503815\") " pod="openstack/nova-metadata-0" Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.611451 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e8cb9c8-c54d-4269-b59b-6e865d503815-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3e8cb9c8-c54d-4269-b59b-6e865d503815\") " pod="openstack/nova-metadata-0" Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.612915 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e8cb9c8-c54d-4269-b59b-6e865d503815-config-data\") pod \"nova-metadata-0\" (UID: \"3e8cb9c8-c54d-4269-b59b-6e865d503815\") " pod="openstack/nova-metadata-0" Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.628828 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5pn2\" (UniqueName: \"kubernetes.io/projected/3e8cb9c8-c54d-4269-b59b-6e865d503815-kube-api-access-n5pn2\") pod \"nova-metadata-0\" (UID: \"3e8cb9c8-c54d-4269-b59b-6e865d503815\") " pod="openstack/nova-metadata-0" Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.760573 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.878587 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.911577 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgcr9\" (UniqueName: \"kubernetes.io/projected/eb7e3676-7ede-4882-984c-4f2e68c73420-kube-api-access-mgcr9\") pod \"eb7e3676-7ede-4882-984c-4f2e68c73420\" (UID: \"eb7e3676-7ede-4882-984c-4f2e68c73420\") " Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.911651 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb7e3676-7ede-4882-984c-4f2e68c73420-combined-ca-bundle\") pod \"eb7e3676-7ede-4882-984c-4f2e68c73420\" (UID: \"eb7e3676-7ede-4882-984c-4f2e68c73420\") " Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.911700 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb7e3676-7ede-4882-984c-4f2e68c73420-config-data\") pod \"eb7e3676-7ede-4882-984c-4f2e68c73420\" (UID: \"eb7e3676-7ede-4882-984c-4f2e68c73420\") " Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.926357 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb7e3676-7ede-4882-984c-4f2e68c73420-kube-api-access-mgcr9" (OuterVolumeSpecName: "kube-api-access-mgcr9") pod "eb7e3676-7ede-4882-984c-4f2e68c73420" (UID: "eb7e3676-7ede-4882-984c-4f2e68c73420"). InnerVolumeSpecName "kube-api-access-mgcr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 06:59:30 crc kubenswrapper[4947]: E1129 06:59:30.944328 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb7e3676-7ede-4882-984c-4f2e68c73420-config-data podName:eb7e3676-7ede-4882-984c-4f2e68c73420 nodeName:}" failed. No retries permitted until 2025-11-29 06:59:31.444285215 +0000 UTC m=+1522.488667296 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/eb7e3676-7ede-4882-984c-4f2e68c73420-config-data") pod "eb7e3676-7ede-4882-984c-4f2e68c73420" (UID: "eb7e3676-7ede-4882-984c-4f2e68c73420") : error deleting /var/lib/kubelet/pods/eb7e3676-7ede-4882-984c-4f2e68c73420/volume-subpaths: remove /var/lib/kubelet/pods/eb7e3676-7ede-4882-984c-4f2e68c73420/volume-subpaths: no such file or directory Nov 29 06:59:30 crc kubenswrapper[4947]: I1129 06:59:30.948837 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb7e3676-7ede-4882-984c-4f2e68c73420-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb7e3676-7ede-4882-984c-4f2e68c73420" (UID: "eb7e3676-7ede-4882-984c-4f2e68c73420"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.014116 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgcr9\" (UniqueName: \"kubernetes.io/projected/eb7e3676-7ede-4882-984c-4f2e68c73420-kube-api-access-mgcr9\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.014159 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb7e3676-7ede-4882-984c-4f2e68c73420-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.190092 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7845fd7b-f71a-4974-ae50-17ce9451207f" path="/var/lib/kubelet/pods/7845fd7b-f71a-4974-ae50-17ce9451207f/volumes" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.294276 4947 generic.go:334] "Generic (PLEG): container finished" podID="eb7e3676-7ede-4882-984c-4f2e68c73420" containerID="f00a8fe203a625eaac6622ebc6f9c8c85707ed2a66ebf16e16ab703139d741e7" exitCode=0 Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 
06:59:31.294378 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.294420 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.294386 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eb7e3676-7ede-4882-984c-4f2e68c73420","Type":"ContainerDied","Data":"f00a8fe203a625eaac6622ebc6f9c8c85707ed2a66ebf16e16ab703139d741e7"} Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.294495 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eb7e3676-7ede-4882-984c-4f2e68c73420","Type":"ContainerDied","Data":"cc150f53865e730a423176da4e5df3db1b394478c8ca3aef88f0682974932700"} Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.294560 4947 scope.go:117] "RemoveContainer" containerID="f00a8fe203a625eaac6622ebc6f9c8c85707ed2a66ebf16e16ab703139d741e7" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.327759 4947 scope.go:117] "RemoveContainer" containerID="f00a8fe203a625eaac6622ebc6f9c8c85707ed2a66ebf16e16ab703139d741e7" Nov 29 06:59:31 crc kubenswrapper[4947]: E1129 06:59:31.328375 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f00a8fe203a625eaac6622ebc6f9c8c85707ed2a66ebf16e16ab703139d741e7\": container with ID starting with f00a8fe203a625eaac6622ebc6f9c8c85707ed2a66ebf16e16ab703139d741e7 not found: ID does not exist" containerID="f00a8fe203a625eaac6622ebc6f9c8c85707ed2a66ebf16e16ab703139d741e7" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.328449 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f00a8fe203a625eaac6622ebc6f9c8c85707ed2a66ebf16e16ab703139d741e7"} err="failed to get container status 
\"f00a8fe203a625eaac6622ebc6f9c8c85707ed2a66ebf16e16ab703139d741e7\": rpc error: code = NotFound desc = could not find container \"f00a8fe203a625eaac6622ebc6f9c8c85707ed2a66ebf16e16ab703139d741e7\": container with ID starting with f00a8fe203a625eaac6622ebc6f9c8c85707ed2a66ebf16e16ab703139d741e7 not found: ID does not exist" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.346876 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.389312 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.404998 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 29 06:59:31 crc kubenswrapper[4947]: E1129 06:59:31.405525 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb7e3676-7ede-4882-984c-4f2e68c73420" containerName="nova-scheduler-scheduler" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.405542 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb7e3676-7ede-4882-984c-4f2e68c73420" containerName="nova-scheduler-scheduler" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.405786 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb7e3676-7ede-4882-984c-4f2e68c73420" containerName="nova-scheduler-scheduler" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.406931 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.411817 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.428582 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.430843 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.430889 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.431095 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.433318 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9b456d6-c5bb-4832-9497-b029168ba297-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e9b456d6-c5bb-4832-9497-b029168ba297\") " pod="openstack/nova-api-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.433341 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9b456d6-c5bb-4832-9497-b029168ba297-public-tls-certs\") pod \"nova-api-0\" (UID: \"e9b456d6-c5bb-4832-9497-b029168ba297\") " pod="openstack/nova-api-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.433381 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b456d6-c5bb-4832-9497-b029168ba297-config-data\") pod \"nova-api-0\" (UID: \"e9b456d6-c5bb-4832-9497-b029168ba297\") " pod="openstack/nova-api-0" Nov 29 06:59:31 crc kubenswrapper[4947]: 
I1129 06:59:31.433442 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29q6q\" (UniqueName: \"kubernetes.io/projected/e9b456d6-c5bb-4832-9497-b029168ba297-kube-api-access-29q6q\") pod \"nova-api-0\" (UID: \"e9b456d6-c5bb-4832-9497-b029168ba297\") " pod="openstack/nova-api-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.433488 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9b456d6-c5bb-4832-9497-b029168ba297-logs\") pod \"nova-api-0\" (UID: \"e9b456d6-c5bb-4832-9497-b029168ba297\") " pod="openstack/nova-api-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.433557 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b456d6-c5bb-4832-9497-b029168ba297-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e9b456d6-c5bb-4832-9497-b029168ba297\") " pod="openstack/nova-api-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.536184 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb7e3676-7ede-4882-984c-4f2e68c73420-config-data\") pod \"eb7e3676-7ede-4882-984c-4f2e68c73420\" (UID: \"eb7e3676-7ede-4882-984c-4f2e68c73420\") " Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.536787 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9b456d6-c5bb-4832-9497-b029168ba297-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e9b456d6-c5bb-4832-9497-b029168ba297\") " pod="openstack/nova-api-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.536825 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e9b456d6-c5bb-4832-9497-b029168ba297-public-tls-certs\") pod \"nova-api-0\" (UID: \"e9b456d6-c5bb-4832-9497-b029168ba297\") " pod="openstack/nova-api-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.536869 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b456d6-c5bb-4832-9497-b029168ba297-config-data\") pod \"nova-api-0\" (UID: \"e9b456d6-c5bb-4832-9497-b029168ba297\") " pod="openstack/nova-api-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.536938 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29q6q\" (UniqueName: \"kubernetes.io/projected/e9b456d6-c5bb-4832-9497-b029168ba297-kube-api-access-29q6q\") pod \"nova-api-0\" (UID: \"e9b456d6-c5bb-4832-9497-b029168ba297\") " pod="openstack/nova-api-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.536987 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9b456d6-c5bb-4832-9497-b029168ba297-logs\") pod \"nova-api-0\" (UID: \"e9b456d6-c5bb-4832-9497-b029168ba297\") " pod="openstack/nova-api-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.537074 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b456d6-c5bb-4832-9497-b029168ba297-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e9b456d6-c5bb-4832-9497-b029168ba297\") " pod="openstack/nova-api-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.540346 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9b456d6-c5bb-4832-9497-b029168ba297-logs\") pod \"nova-api-0\" (UID: \"e9b456d6-c5bb-4832-9497-b029168ba297\") " pod="openstack/nova-api-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.543368 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9b456d6-c5bb-4832-9497-b029168ba297-public-tls-certs\") pod \"nova-api-0\" (UID: \"e9b456d6-c5bb-4832-9497-b029168ba297\") " pod="openstack/nova-api-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.543654 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b456d6-c5bb-4832-9497-b029168ba297-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e9b456d6-c5bb-4832-9497-b029168ba297\") " pod="openstack/nova-api-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.544309 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b456d6-c5bb-4832-9497-b029168ba297-config-data\") pod \"nova-api-0\" (UID: \"e9b456d6-c5bb-4832-9497-b029168ba297\") " pod="openstack/nova-api-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.546153 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb7e3676-7ede-4882-984c-4f2e68c73420-config-data" (OuterVolumeSpecName: "config-data") pod "eb7e3676-7ede-4882-984c-4f2e68c73420" (UID: "eb7e3676-7ede-4882-984c-4f2e68c73420"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.548175 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9b456d6-c5bb-4832-9497-b029168ba297-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e9b456d6-c5bb-4832-9497-b029168ba297\") " pod="openstack/nova-api-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.566195 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29q6q\" (UniqueName: \"kubernetes.io/projected/e9b456d6-c5bb-4832-9497-b029168ba297-kube-api-access-29q6q\") pod \"nova-api-0\" (UID: \"e9b456d6-c5bb-4832-9497-b029168ba297\") " pod="openstack/nova-api-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.639459 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb7e3676-7ede-4882-984c-4f2e68c73420-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.653985 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.667169 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.684274 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.687032 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.690628 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.693107 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.741161 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7wwd\" (UniqueName: \"kubernetes.io/projected/3531a457-8faa-47a6-8db0-4bfc82898e36-kube-api-access-k7wwd\") pod \"nova-scheduler-0\" (UID: \"3531a457-8faa-47a6-8db0-4bfc82898e36\") " pod="openstack/nova-scheduler-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.741362 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3531a457-8faa-47a6-8db0-4bfc82898e36-config-data\") pod \"nova-scheduler-0\" (UID: \"3531a457-8faa-47a6-8db0-4bfc82898e36\") " pod="openstack/nova-scheduler-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.741502 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3531a457-8faa-47a6-8db0-4bfc82898e36-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3531a457-8faa-47a6-8db0-4bfc82898e36\") " pod="openstack/nova-scheduler-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.801387 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.844285 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3531a457-8faa-47a6-8db0-4bfc82898e36-config-data\") pod \"nova-scheduler-0\" (UID: \"3531a457-8faa-47a6-8db0-4bfc82898e36\") " pod="openstack/nova-scheduler-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.844472 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3531a457-8faa-47a6-8db0-4bfc82898e36-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3531a457-8faa-47a6-8db0-4bfc82898e36\") " pod="openstack/nova-scheduler-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.845173 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7wwd\" (UniqueName: \"kubernetes.io/projected/3531a457-8faa-47a6-8db0-4bfc82898e36-kube-api-access-k7wwd\") pod \"nova-scheduler-0\" (UID: \"3531a457-8faa-47a6-8db0-4bfc82898e36\") " pod="openstack/nova-scheduler-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.850080 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3531a457-8faa-47a6-8db0-4bfc82898e36-config-data\") pod \"nova-scheduler-0\" (UID: \"3531a457-8faa-47a6-8db0-4bfc82898e36\") " pod="openstack/nova-scheduler-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.863925 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3531a457-8faa-47a6-8db0-4bfc82898e36-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3531a457-8faa-47a6-8db0-4bfc82898e36\") " pod="openstack/nova-scheduler-0" Nov 29 06:59:31 crc kubenswrapper[4947]: I1129 06:59:31.879836 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-k7wwd\" (UniqueName: \"kubernetes.io/projected/3531a457-8faa-47a6-8db0-4bfc82898e36-kube-api-access-k7wwd\") pod \"nova-scheduler-0\" (UID: \"3531a457-8faa-47a6-8db0-4bfc82898e36\") " pod="openstack/nova-scheduler-0" Nov 29 06:59:32 crc kubenswrapper[4947]: I1129 06:59:32.022237 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 06:59:32 crc kubenswrapper[4947]: I1129 06:59:32.307321 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e8cb9c8-c54d-4269-b59b-6e865d503815","Type":"ContainerStarted","Data":"79c92e0217f72097a8bd41f465655687e69176c6031c4f551eb3ed08db7dc102"} Nov 29 06:59:32 crc kubenswrapper[4947]: I1129 06:59:32.307815 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e8cb9c8-c54d-4269-b59b-6e865d503815","Type":"ContainerStarted","Data":"6c7b875c0879db6b75b09f97e5bff50af56bbe5cb719fedbf710a8e01076470a"} Nov 29 06:59:32 crc kubenswrapper[4947]: I1129 06:59:32.307835 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e8cb9c8-c54d-4269-b59b-6e865d503815","Type":"ContainerStarted","Data":"402a263a37fa71b595b2f200a04cf45c039ce2406fc4e60fc1411bf4c4dbb814"} Nov 29 06:59:32 crc kubenswrapper[4947]: I1129 06:59:32.325767 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 06:59:32 crc kubenswrapper[4947]: W1129 06:59:32.332388 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9b456d6_c5bb_4832_9497_b029168ba297.slice/crio-94c10827725454c9b88060ea90ff8fb5ea2b6d6ba22a672c8dac808ed56e3598 WatchSource:0}: Error finding container 94c10827725454c9b88060ea90ff8fb5ea2b6d6ba22a672c8dac808ed56e3598: Status 404 returned error can't find the container with id 
94c10827725454c9b88060ea90ff8fb5ea2b6d6ba22a672c8dac808ed56e3598 Nov 29 06:59:32 crc kubenswrapper[4947]: I1129 06:59:32.557165 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 06:59:33 crc kubenswrapper[4947]: I1129 06:59:33.193457 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2404cde8-1d52-4931-9361-434e7de71954" path="/var/lib/kubelet/pods/2404cde8-1d52-4931-9361-434e7de71954/volumes" Nov 29 06:59:33 crc kubenswrapper[4947]: I1129 06:59:33.194289 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb7e3676-7ede-4882-984c-4f2e68c73420" path="/var/lib/kubelet/pods/eb7e3676-7ede-4882-984c-4f2e68c73420/volumes" Nov 29 06:59:33 crc kubenswrapper[4947]: I1129 06:59:33.322600 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3531a457-8faa-47a6-8db0-4bfc82898e36","Type":"ContainerStarted","Data":"3f6d826e589c4511bc22edf5b6ec8d5c0e13d12ad843b840ec42ee849906a1db"} Nov 29 06:59:33 crc kubenswrapper[4947]: I1129 06:59:33.323056 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3531a457-8faa-47a6-8db0-4bfc82898e36","Type":"ContainerStarted","Data":"0f79bafe80aa77b53717ba32258ec36957bebf3e041800e6ffdf39d56293bbdd"} Nov 29 06:59:33 crc kubenswrapper[4947]: I1129 06:59:33.326952 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e9b456d6-c5bb-4832-9497-b029168ba297","Type":"ContainerStarted","Data":"43147e355001fabf1ae2740d16cd28d94456da0daf2e35d17e06d2d0d4139f3c"} Nov 29 06:59:33 crc kubenswrapper[4947]: I1129 06:59:33.326998 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e9b456d6-c5bb-4832-9497-b029168ba297","Type":"ContainerStarted","Data":"fc250cdefc69d81628b83c7ccf5af52724405e2064455491a3628264238578c5"} Nov 29 06:59:33 crc kubenswrapper[4947]: I1129 06:59:33.327013 4947 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e9b456d6-c5bb-4832-9497-b029168ba297","Type":"ContainerStarted","Data":"94c10827725454c9b88060ea90ff8fb5ea2b6d6ba22a672c8dac808ed56e3598"} Nov 29 06:59:33 crc kubenswrapper[4947]: I1129 06:59:33.350014 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.349973224 podStartE2EDuration="2.349973224s" podCreationTimestamp="2025-11-29 06:59:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:59:33.344294741 +0000 UTC m=+1524.388676822" watchObservedRunningTime="2025-11-29 06:59:33.349973224 +0000 UTC m=+1524.394355305" Nov 29 06:59:33 crc kubenswrapper[4947]: I1129 06:59:33.373037 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.373007263 podStartE2EDuration="2.373007263s" podCreationTimestamp="2025-11-29 06:59:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:59:33.369133345 +0000 UTC m=+1524.413515446" watchObservedRunningTime="2025-11-29 06:59:33.373007263 +0000 UTC m=+1524.417389344" Nov 29 06:59:33 crc kubenswrapper[4947]: I1129 06:59:33.436952 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.4369110689999998 podStartE2EDuration="3.436911069s" podCreationTimestamp="2025-11-29 06:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 06:59:33.414878165 +0000 UTC m=+1524.459260246" watchObservedRunningTime="2025-11-29 06:59:33.436911069 +0000 UTC m=+1524.481293160" Nov 29 06:59:34 crc kubenswrapper[4947]: I1129 06:59:34.529264 4947 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7845fd7b-f71a-4974-ae50-17ce9451207f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.176:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 29 06:59:34 crc kubenswrapper[4947]: I1129 06:59:34.529490 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7845fd7b-f71a-4974-ae50-17ce9451207f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.176:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 29 06:59:35 crc kubenswrapper[4947]: I1129 06:59:35.761563 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 06:59:35 crc kubenswrapper[4947]: I1129 06:59:35.762073 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 06:59:37 crc kubenswrapper[4947]: I1129 06:59:37.023610 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 29 06:59:40 crc kubenswrapper[4947]: I1129 06:59:40.761476 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 29 06:59:40 crc kubenswrapper[4947]: I1129 06:59:40.761945 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 29 06:59:41 crc kubenswrapper[4947]: I1129 06:59:41.782456 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3e8cb9c8-c54d-4269-b59b-6e865d503815" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.186:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 06:59:41 crc kubenswrapper[4947]: I1129 06:59:41.783009 
4947 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3e8cb9c8-c54d-4269-b59b-6e865d503815" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.186:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 06:59:41 crc kubenswrapper[4947]: I1129 06:59:41.802937 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 06:59:41 crc kubenswrapper[4947]: I1129 06:59:41.803042 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 06:59:42 crc kubenswrapper[4947]: I1129 06:59:42.024362 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 29 06:59:42 crc kubenswrapper[4947]: I1129 06:59:42.067196 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 29 06:59:42 crc kubenswrapper[4947]: I1129 06:59:42.461000 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 29 06:59:42 crc kubenswrapper[4947]: I1129 06:59:42.819534 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e9b456d6-c5bb-4832-9497-b029168ba297" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.187:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 06:59:42 crc kubenswrapper[4947]: I1129 06:59:42.819626 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e9b456d6-c5bb-4832-9497-b029168ba297" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.187:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 06:59:44 crc kubenswrapper[4947]: I1129 06:59:44.787266 4947 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 29 06:59:50 crc kubenswrapper[4947]: I1129 06:59:50.769661 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 29 06:59:50 crc kubenswrapper[4947]: I1129 06:59:50.771303 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 29 06:59:50 crc kubenswrapper[4947]: I1129 06:59:50.776710 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 29 06:59:51 crc kubenswrapper[4947]: I1129 06:59:51.578568 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 29 06:59:51 crc kubenswrapper[4947]: I1129 06:59:51.809517 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 29 06:59:51 crc kubenswrapper[4947]: I1129 06:59:51.810245 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 29 06:59:51 crc kubenswrapper[4947]: I1129 06:59:51.815198 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 29 06:59:51 crc kubenswrapper[4947]: I1129 06:59:51.818077 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 29 06:59:52 crc kubenswrapper[4947]: I1129 06:59:52.589258 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 29 06:59:52 crc kubenswrapper[4947]: I1129 06:59:52.770110 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 29 07:00:00 crc kubenswrapper[4947]: I1129 07:00:00.170116 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406660-v5gfx"] Nov 29 07:00:00 crc kubenswrapper[4947]: I1129 07:00:00.175775 4947 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-v5gfx" Nov 29 07:00:00 crc kubenswrapper[4947]: I1129 07:00:00.179460 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 07:00:00 crc kubenswrapper[4947]: I1129 07:00:00.179730 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 07:00:00 crc kubenswrapper[4947]: I1129 07:00:00.208085 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406660-v5gfx"] Nov 29 07:00:00 crc kubenswrapper[4947]: I1129 07:00:00.233313 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b7812d7-d76d-4055-8c4c-c59f058af07f-config-volume\") pod \"collect-profiles-29406660-v5gfx\" (UID: \"7b7812d7-d76d-4055-8c4c-c59f058af07f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-v5gfx" Nov 29 07:00:00 crc kubenswrapper[4947]: I1129 07:00:00.233386 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwnzk\" (UniqueName: \"kubernetes.io/projected/7b7812d7-d76d-4055-8c4c-c59f058af07f-kube-api-access-fwnzk\") pod \"collect-profiles-29406660-v5gfx\" (UID: \"7b7812d7-d76d-4055-8c4c-c59f058af07f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-v5gfx" Nov 29 07:00:00 crc kubenswrapper[4947]: I1129 07:00:00.233444 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b7812d7-d76d-4055-8c4c-c59f058af07f-secret-volume\") pod \"collect-profiles-29406660-v5gfx\" (UID: \"7b7812d7-d76d-4055-8c4c-c59f058af07f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-v5gfx" Nov 29 07:00:00 crc kubenswrapper[4947]: I1129 07:00:00.335534 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b7812d7-d76d-4055-8c4c-c59f058af07f-config-volume\") pod \"collect-profiles-29406660-v5gfx\" (UID: \"7b7812d7-d76d-4055-8c4c-c59f058af07f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-v5gfx" Nov 29 07:00:00 crc kubenswrapper[4947]: I1129 07:00:00.335643 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwnzk\" (UniqueName: \"kubernetes.io/projected/7b7812d7-d76d-4055-8c4c-c59f058af07f-kube-api-access-fwnzk\") pod \"collect-profiles-29406660-v5gfx\" (UID: \"7b7812d7-d76d-4055-8c4c-c59f058af07f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-v5gfx" Nov 29 07:00:00 crc kubenswrapper[4947]: I1129 07:00:00.335705 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b7812d7-d76d-4055-8c4c-c59f058af07f-secret-volume\") pod \"collect-profiles-29406660-v5gfx\" (UID: \"7b7812d7-d76d-4055-8c4c-c59f058af07f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-v5gfx" Nov 29 07:00:00 crc kubenswrapper[4947]: I1129 07:00:00.336847 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b7812d7-d76d-4055-8c4c-c59f058af07f-config-volume\") pod \"collect-profiles-29406660-v5gfx\" (UID: \"7b7812d7-d76d-4055-8c4c-c59f058af07f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-v5gfx" Nov 29 07:00:00 crc kubenswrapper[4947]: I1129 07:00:00.343923 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7b7812d7-d76d-4055-8c4c-c59f058af07f-secret-volume\") pod \"collect-profiles-29406660-v5gfx\" (UID: \"7b7812d7-d76d-4055-8c4c-c59f058af07f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-v5gfx" Nov 29 07:00:00 crc kubenswrapper[4947]: I1129 07:00:00.356846 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwnzk\" (UniqueName: \"kubernetes.io/projected/7b7812d7-d76d-4055-8c4c-c59f058af07f-kube-api-access-fwnzk\") pod \"collect-profiles-29406660-v5gfx\" (UID: \"7b7812d7-d76d-4055-8c4c-c59f058af07f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-v5gfx" Nov 29 07:00:00 crc kubenswrapper[4947]: I1129 07:00:00.524526 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-v5gfx" Nov 29 07:00:01 crc kubenswrapper[4947]: I1129 07:00:01.125034 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406660-v5gfx"] Nov 29 07:00:01 crc kubenswrapper[4947]: I1129 07:00:01.745913 4947 generic.go:334] "Generic (PLEG): container finished" podID="7b7812d7-d76d-4055-8c4c-c59f058af07f" containerID="d3ea36a03170ca5c542cde276b1e0a404cef6cbc6e1bb7e8a93dbba887929c75" exitCode=0 Nov 29 07:00:01 crc kubenswrapper[4947]: I1129 07:00:01.746899 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-v5gfx" event={"ID":"7b7812d7-d76d-4055-8c4c-c59f058af07f","Type":"ContainerDied","Data":"d3ea36a03170ca5c542cde276b1e0a404cef6cbc6e1bb7e8a93dbba887929c75"} Nov 29 07:00:01 crc kubenswrapper[4947]: I1129 07:00:01.746949 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-v5gfx" 
event={"ID":"7b7812d7-d76d-4055-8c4c-c59f058af07f","Type":"ContainerStarted","Data":"72438b7958c7eed867fd788ed12b6179cae884565c645a12b108801fba1a1b72"} Nov 29 07:00:02 crc kubenswrapper[4947]: I1129 07:00:02.285825 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:00:03 crc kubenswrapper[4947]: I1129 07:00:03.145670 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-v5gfx" Nov 29 07:00:03 crc kubenswrapper[4947]: I1129 07:00:03.208010 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b7812d7-d76d-4055-8c4c-c59f058af07f-config-volume\") pod \"7b7812d7-d76d-4055-8c4c-c59f058af07f\" (UID: \"7b7812d7-d76d-4055-8c4c-c59f058af07f\") " Nov 29 07:00:03 crc kubenswrapper[4947]: I1129 07:00:03.210018 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b7812d7-d76d-4055-8c4c-c59f058af07f-config-volume" (OuterVolumeSpecName: "config-volume") pod "7b7812d7-d76d-4055-8c4c-c59f058af07f" (UID: "7b7812d7-d76d-4055-8c4c-c59f058af07f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:00:03 crc kubenswrapper[4947]: I1129 07:00:03.231420 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:00:03 crc kubenswrapper[4947]: I1129 07:00:03.310833 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwnzk\" (UniqueName: \"kubernetes.io/projected/7b7812d7-d76d-4055-8c4c-c59f058af07f-kube-api-access-fwnzk\") pod \"7b7812d7-d76d-4055-8c4c-c59f058af07f\" (UID: \"7b7812d7-d76d-4055-8c4c-c59f058af07f\") " Nov 29 07:00:03 crc kubenswrapper[4947]: I1129 07:00:03.311166 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b7812d7-d76d-4055-8c4c-c59f058af07f-secret-volume\") pod \"7b7812d7-d76d-4055-8c4c-c59f058af07f\" (UID: \"7b7812d7-d76d-4055-8c4c-c59f058af07f\") " Nov 29 07:00:03 crc kubenswrapper[4947]: I1129 07:00:03.311814 4947 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b7812d7-d76d-4055-8c4c-c59f058af07f-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:03 crc kubenswrapper[4947]: I1129 07:00:03.323025 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b7812d7-d76d-4055-8c4c-c59f058af07f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7b7812d7-d76d-4055-8c4c-c59f058af07f" (UID: "7b7812d7-d76d-4055-8c4c-c59f058af07f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:00:03 crc kubenswrapper[4947]: I1129 07:00:03.339312 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b7812d7-d76d-4055-8c4c-c59f058af07f-kube-api-access-fwnzk" (OuterVolumeSpecName: "kube-api-access-fwnzk") pod "7b7812d7-d76d-4055-8c4c-c59f058af07f" (UID: "7b7812d7-d76d-4055-8c4c-c59f058af07f"). 
InnerVolumeSpecName "kube-api-access-fwnzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:00:03 crc kubenswrapper[4947]: I1129 07:00:03.412835 4947 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b7812d7-d76d-4055-8c4c-c59f058af07f-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:03 crc kubenswrapper[4947]: I1129 07:00:03.412907 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwnzk\" (UniqueName: \"kubernetes.io/projected/7b7812d7-d76d-4055-8c4c-c59f058af07f-kube-api-access-fwnzk\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:03 crc kubenswrapper[4947]: I1129 07:00:03.772198 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-v5gfx" event={"ID":"7b7812d7-d76d-4055-8c4c-c59f058af07f","Type":"ContainerDied","Data":"72438b7958c7eed867fd788ed12b6179cae884565c645a12b108801fba1a1b72"} Nov 29 07:00:03 crc kubenswrapper[4947]: I1129 07:00:03.772333 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72438b7958c7eed867fd788ed12b6179cae884565c645a12b108801fba1a1b72" Nov 29 07:00:03 crc kubenswrapper[4947]: I1129 07:00:03.772399 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406660-v5gfx" Nov 29 07:00:07 crc kubenswrapper[4947]: I1129 07:00:07.743668 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="1df9108b-7e5b-4dd6-bd7e-787381428bce" containerName="rabbitmq" containerID="cri-o://0fff10656ad29de8b7f5aeb6366cf54e2b7e706199c69fac88eb15e75c97c542" gracePeriod=604795 Nov 29 07:00:08 crc kubenswrapper[4947]: I1129 07:00:08.474092 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8" containerName="rabbitmq" containerID="cri-o://1de336f26a889f79cd35e84bb053e33754774c33fa596603e4d541d0ced2a2dc" gracePeriod=604795 Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.538957 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.691602 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"1df9108b-7e5b-4dd6-bd7e-787381428bce\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.691679 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1df9108b-7e5b-4dd6-bd7e-787381428bce-pod-info\") pod \"1df9108b-7e5b-4dd6-bd7e-787381428bce\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.691755 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1df9108b-7e5b-4dd6-bd7e-787381428bce-server-conf\") pod \"1df9108b-7e5b-4dd6-bd7e-787381428bce\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " 
Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.691803 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-confd\") pod \"1df9108b-7e5b-4dd6-bd7e-787381428bce\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.691872 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hflkc\" (UniqueName: \"kubernetes.io/projected/1df9108b-7e5b-4dd6-bd7e-787381428bce-kube-api-access-hflkc\") pod \"1df9108b-7e5b-4dd6-bd7e-787381428bce\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.691952 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1df9108b-7e5b-4dd6-bd7e-787381428bce-erlang-cookie-secret\") pod \"1df9108b-7e5b-4dd6-bd7e-787381428bce\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.691990 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-tls\") pod \"1df9108b-7e5b-4dd6-bd7e-787381428bce\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.692057 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1df9108b-7e5b-4dd6-bd7e-787381428bce-plugins-conf\") pod \"1df9108b-7e5b-4dd6-bd7e-787381428bce\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.692091 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/1df9108b-7e5b-4dd6-bd7e-787381428bce-config-data\") pod \"1df9108b-7e5b-4dd6-bd7e-787381428bce\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.692119 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-plugins\") pod \"1df9108b-7e5b-4dd6-bd7e-787381428bce\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.692241 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-erlang-cookie\") pod \"1df9108b-7e5b-4dd6-bd7e-787381428bce\" (UID: \"1df9108b-7e5b-4dd6-bd7e-787381428bce\") " Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.693337 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "1df9108b-7e5b-4dd6-bd7e-787381428bce" (UID: "1df9108b-7e5b-4dd6-bd7e-787381428bce"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.693463 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1df9108b-7e5b-4dd6-bd7e-787381428bce-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "1df9108b-7e5b-4dd6-bd7e-787381428bce" (UID: "1df9108b-7e5b-4dd6-bd7e-787381428bce"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.693944 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "1df9108b-7e5b-4dd6-bd7e-787381428bce" (UID: "1df9108b-7e5b-4dd6-bd7e-787381428bce"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.694730 4947 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1df9108b-7e5b-4dd6-bd7e-787381428bce-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.694746 4947 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.694762 4947 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.713967 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1df9108b-7e5b-4dd6-bd7e-787381428bce-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "1df9108b-7e5b-4dd6-bd7e-787381428bce" (UID: "1df9108b-7e5b-4dd6-bd7e-787381428bce"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.714094 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1df9108b-7e5b-4dd6-bd7e-787381428bce-pod-info" (OuterVolumeSpecName: "pod-info") pod "1df9108b-7e5b-4dd6-bd7e-787381428bce" (UID: "1df9108b-7e5b-4dd6-bd7e-787381428bce"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.715431 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "1df9108b-7e5b-4dd6-bd7e-787381428bce" (UID: "1df9108b-7e5b-4dd6-bd7e-787381428bce"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.716724 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "1df9108b-7e5b-4dd6-bd7e-787381428bce" (UID: "1df9108b-7e5b-4dd6-bd7e-787381428bce"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.718740 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1df9108b-7e5b-4dd6-bd7e-787381428bce-kube-api-access-hflkc" (OuterVolumeSpecName: "kube-api-access-hflkc") pod "1df9108b-7e5b-4dd6-bd7e-787381428bce" (UID: "1df9108b-7e5b-4dd6-bd7e-787381428bce"). InnerVolumeSpecName "kube-api-access-hflkc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.745748 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1df9108b-7e5b-4dd6-bd7e-787381428bce-config-data" (OuterVolumeSpecName: "config-data") pod "1df9108b-7e5b-4dd6-bd7e-787381428bce" (UID: "1df9108b-7e5b-4dd6-bd7e-787381428bce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.797937 4947 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.798072 4947 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1df9108b-7e5b-4dd6-bd7e-787381428bce-pod-info\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.798091 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hflkc\" (UniqueName: \"kubernetes.io/projected/1df9108b-7e5b-4dd6-bd7e-787381428bce-kube-api-access-hflkc\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.798104 4947 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1df9108b-7e5b-4dd6-bd7e-787381428bce-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.798114 4947 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.798148 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/1df9108b-7e5b-4dd6-bd7e-787381428bce-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.805088 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1df9108b-7e5b-4dd6-bd7e-787381428bce-server-conf" (OuterVolumeSpecName: "server-conf") pod "1df9108b-7e5b-4dd6-bd7e-787381428bce" (UID: "1df9108b-7e5b-4dd6-bd7e-787381428bce"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.829578 4947 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.846937 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "1df9108b-7e5b-4dd6-bd7e-787381428bce" (UID: "1df9108b-7e5b-4dd6-bd7e-787381428bce"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.900001 4947 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.900458 4947 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1df9108b-7e5b-4dd6-bd7e-787381428bce-server-conf\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.900528 4947 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1df9108b-7e5b-4dd6-bd7e-787381428bce-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.937968 4947 generic.go:334] "Generic (PLEG): container finished" podID="e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8" containerID="1de336f26a889f79cd35e84bb053e33754774c33fa596603e4d541d0ced2a2dc" exitCode=0 Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.938068 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8","Type":"ContainerDied","Data":"1de336f26a889f79cd35e84bb053e33754774c33fa596603e4d541d0ced2a2dc"} Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.941601 4947 generic.go:334] "Generic (PLEG): container finished" podID="1df9108b-7e5b-4dd6-bd7e-787381428bce" containerID="0fff10656ad29de8b7f5aeb6366cf54e2b7e706199c69fac88eb15e75c97c542" exitCode=0 Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.941761 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1df9108b-7e5b-4dd6-bd7e-787381428bce","Type":"ContainerDied","Data":"0fff10656ad29de8b7f5aeb6366cf54e2b7e706199c69fac88eb15e75c97c542"} Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 
07:00:14.941879 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1df9108b-7e5b-4dd6-bd7e-787381428bce","Type":"ContainerDied","Data":"7b5d98b30de299d7d151cb0014a2db5c1f871768f887428c580707732d5b07de"} Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.941949 4947 scope.go:117] "RemoveContainer" containerID="0fff10656ad29de8b7f5aeb6366cf54e2b7e706199c69fac88eb15e75c97c542" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.942040 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 07:00:14 crc kubenswrapper[4947]: I1129 07:00:14.980133 4947 scope.go:117] "RemoveContainer" containerID="c557257ddf99194f29d76a34461fba82b4700930bfc653296e8ab52b9ef25318" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.013209 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.043441 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.057599 4947 scope.go:117] "RemoveContainer" containerID="0fff10656ad29de8b7f5aeb6366cf54e2b7e706199c69fac88eb15e75c97c542" Nov 29 07:00:15 crc kubenswrapper[4947]: E1129 07:00:15.062539 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fff10656ad29de8b7f5aeb6366cf54e2b7e706199c69fac88eb15e75c97c542\": container with ID starting with 0fff10656ad29de8b7f5aeb6366cf54e2b7e706199c69fac88eb15e75c97c542 not found: ID does not exist" containerID="0fff10656ad29de8b7f5aeb6366cf54e2b7e706199c69fac88eb15e75c97c542" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.062626 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fff10656ad29de8b7f5aeb6366cf54e2b7e706199c69fac88eb15e75c97c542"} err="failed to get 
container status \"0fff10656ad29de8b7f5aeb6366cf54e2b7e706199c69fac88eb15e75c97c542\": rpc error: code = NotFound desc = could not find container \"0fff10656ad29de8b7f5aeb6366cf54e2b7e706199c69fac88eb15e75c97c542\": container with ID starting with 0fff10656ad29de8b7f5aeb6366cf54e2b7e706199c69fac88eb15e75c97c542 not found: ID does not exist" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.062680 4947 scope.go:117] "RemoveContainer" containerID="c557257ddf99194f29d76a34461fba82b4700930bfc653296e8ab52b9ef25318" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.070468 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:00:15 crc kubenswrapper[4947]: E1129 07:00:15.071124 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1df9108b-7e5b-4dd6-bd7e-787381428bce" containerName="rabbitmq" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.071144 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1df9108b-7e5b-4dd6-bd7e-787381428bce" containerName="rabbitmq" Nov 29 07:00:15 crc kubenswrapper[4947]: E1129 07:00:15.071158 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7812d7-d76d-4055-8c4c-c59f058af07f" containerName="collect-profiles" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.071164 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7812d7-d76d-4055-8c4c-c59f058af07f" containerName="collect-profiles" Nov 29 07:00:15 crc kubenswrapper[4947]: E1129 07:00:15.071179 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1df9108b-7e5b-4dd6-bd7e-787381428bce" containerName="setup-container" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.071186 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1df9108b-7e5b-4dd6-bd7e-787381428bce" containerName="setup-container" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.071420 4947 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7b7812d7-d76d-4055-8c4c-c59f058af07f" containerName="collect-profiles" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.071440 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="1df9108b-7e5b-4dd6-bd7e-787381428bce" containerName="rabbitmq" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.072628 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: E1129 07:00:15.074403 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c557257ddf99194f29d76a34461fba82b4700930bfc653296e8ab52b9ef25318\": container with ID starting with c557257ddf99194f29d76a34461fba82b4700930bfc653296e8ab52b9ef25318 not found: ID does not exist" containerID="c557257ddf99194f29d76a34461fba82b4700930bfc653296e8ab52b9ef25318" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.074458 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c557257ddf99194f29d76a34461fba82b4700930bfc653296e8ab52b9ef25318"} err="failed to get container status \"c557257ddf99194f29d76a34461fba82b4700930bfc653296e8ab52b9ef25318\": rpc error: code = NotFound desc = could not find container \"c557257ddf99194f29d76a34461fba82b4700930bfc653296e8ab52b9ef25318\": container with ID starting with c557257ddf99194f29d76a34461fba82b4700930bfc653296e8ab52b9ef25318 not found: ID does not exist" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.076115 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.076337 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.076674 4947 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"rabbitmq-server-conf" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.076804 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.076979 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.077513 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-j9h7d" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.077715 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.080575 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.193500 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1df9108b-7e5b-4dd6-bd7e-787381428bce" path="/var/lib/kubelet/pods/1df9108b-7e5b-4dd6-bd7e-787381428bce/volumes" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.207430 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/173b8534-1ee1-448a-bdd1-62369c58057b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.207484 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/173b8534-1ee1-448a-bdd1-62369c58057b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.207536 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/173b8534-1ee1-448a-bdd1-62369c58057b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.207562 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/173b8534-1ee1-448a-bdd1-62369c58057b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.207588 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/173b8534-1ee1-448a-bdd1-62369c58057b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.207609 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/173b8534-1ee1-448a-bdd1-62369c58057b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.207666 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q62zx\" (UniqueName: \"kubernetes.io/projected/173b8534-1ee1-448a-bdd1-62369c58057b-kube-api-access-q62zx\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.207701 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/173b8534-1ee1-448a-bdd1-62369c58057b-config-data\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.207717 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/173b8534-1ee1-448a-bdd1-62369c58057b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.207741 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.207759 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/173b8534-1ee1-448a-bdd1-62369c58057b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.309647 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q62zx\" (UniqueName: \"kubernetes.io/projected/173b8534-1ee1-448a-bdd1-62369c58057b-kube-api-access-q62zx\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.309719 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/173b8534-1ee1-448a-bdd1-62369c58057b-config-data\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.309739 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/173b8534-1ee1-448a-bdd1-62369c58057b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.309771 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.309797 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/173b8534-1ee1-448a-bdd1-62369c58057b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.309850 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/173b8534-1ee1-448a-bdd1-62369c58057b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.309876 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/173b8534-1ee1-448a-bdd1-62369c58057b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" 
Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.309964 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/173b8534-1ee1-448a-bdd1-62369c58057b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.309991 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/173b8534-1ee1-448a-bdd1-62369c58057b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.310015 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/173b8534-1ee1-448a-bdd1-62369c58057b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.310033 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/173b8534-1ee1-448a-bdd1-62369c58057b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.310605 4947 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.310739 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/173b8534-1ee1-448a-bdd1-62369c58057b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.311185 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/173b8534-1ee1-448a-bdd1-62369c58057b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.311314 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/173b8534-1ee1-448a-bdd1-62369c58057b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.311999 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/173b8534-1ee1-448a-bdd1-62369c58057b-config-data\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.312824 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/173b8534-1ee1-448a-bdd1-62369c58057b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.321278 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/173b8534-1ee1-448a-bdd1-62369c58057b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " 
pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.322843 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/173b8534-1ee1-448a-bdd1-62369c58057b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.337519 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/173b8534-1ee1-448a-bdd1-62369c58057b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.345037 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/173b8534-1ee1-448a-bdd1-62369c58057b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.348393 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q62zx\" (UniqueName: \"kubernetes.io/projected/173b8534-1ee1-448a-bdd1-62369c58057b-kube-api-access-q62zx\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.428585 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"173b8534-1ee1-448a-bdd1-62369c58057b\") " pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.449407 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.722572 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.822599 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-server-conf\") pod \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.822720 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87qj4\" (UniqueName: \"kubernetes.io/projected/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-kube-api-access-87qj4\") pod \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.822790 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-pod-info\") pod \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.822839 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-erlang-cookie-secret\") pod \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.822919 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-plugins-conf\") pod \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\" (UID: 
\"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.822950 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-tls\") pod \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.822979 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-plugins\") pod \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.823055 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-confd\") pod \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.823104 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-erlang-cookie\") pod \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.823172 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.823232 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-config-data\") pod \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\" (UID: \"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8\") " Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.825183 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8" (UID: "e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.826089 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8" (UID: "e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.826967 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8" (UID: "e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.831655 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-kube-api-access-87qj4" (OuterVolumeSpecName: "kube-api-access-87qj4") pod "e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8" (UID: "e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8"). InnerVolumeSpecName "kube-api-access-87qj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.832491 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8" (UID: "e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.832644 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8" (UID: "e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.835608 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-pod-info" (OuterVolumeSpecName: "pod-info") pod "e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8" (UID: "e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.843051 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8" (UID: "e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.862912 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-config-data" (OuterVolumeSpecName: "config-data") pod "e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8" (UID: "e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.894271 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-server-conf" (OuterVolumeSpecName: "server-conf") pod "e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8" (UID: "e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.925783 4947 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.926016 4947 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.926058 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.926074 4947 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-server-conf\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:15 crc 
kubenswrapper[4947]: I1129 07:00:15.926087 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87qj4\" (UniqueName: \"kubernetes.io/projected/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-kube-api-access-87qj4\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.926096 4947 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-pod-info\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.926106 4947 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.926137 4947 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.926146 4947 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.926155 4947 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.952897 4947 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.952925 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8" (UID: "e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.957087 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8","Type":"ContainerDied","Data":"d960731a22a937b28d835a5850d4d039e926a582af834bacc53628877272a038"} Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.957148 4947 scope.go:117] "RemoveContainer" containerID="1de336f26a889f79cd35e84bb053e33754774c33fa596603e4d541d0ced2a2dc" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.957342 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:15 crc kubenswrapper[4947]: I1129 07:00:15.984886 4947 scope.go:117] "RemoveContainer" containerID="715f470ce1721879c0311ceda3a06eeda1c4246dd6ce79c6e137a07f44a5fe37" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.018317 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.029038 4947 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.029102 4947 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.036954 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:00:16 
crc kubenswrapper[4947]: I1129 07:00:16.050792 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.059895 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:00:16 crc kubenswrapper[4947]: E1129 07:00:16.060617 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8" containerName="setup-container" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.060652 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8" containerName="setup-container" Nov 29 07:00:16 crc kubenswrapper[4947]: E1129 07:00:16.060674 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8" containerName="rabbitmq" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.060681 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8" containerName="rabbitmq" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.060873 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8" containerName="rabbitmq" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.062386 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.086317 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.097555 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xvs4b" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.097852 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.098361 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.098722 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.099200 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.100062 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.104388 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.233252 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2x7h\" (UniqueName: \"kubernetes.io/projected/d4d9399d-41b3-40c1-89d4-8124e0966300-kube-api-access-r2x7h\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.233313 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d4d9399d-41b3-40c1-89d4-8124e0966300-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.233341 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.233367 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d4d9399d-41b3-40c1-89d4-8124e0966300-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.233434 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d4d9399d-41b3-40c1-89d4-8124e0966300-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.233466 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d4d9399d-41b3-40c1-89d4-8124e0966300-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.233488 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d4d9399d-41b3-40c1-89d4-8124e0966300-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.233515 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4d9399d-41b3-40c1-89d4-8124e0966300-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.233544 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d4d9399d-41b3-40c1-89d4-8124e0966300-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.233561 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d4d9399d-41b3-40c1-89d4-8124e0966300-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.233586 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d4d9399d-41b3-40c1-89d4-8124e0966300-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.335181 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/d4d9399d-41b3-40c1-89d4-8124e0966300-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.335304 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d4d9399d-41b3-40c1-89d4-8124e0966300-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.335355 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d4d9399d-41b3-40c1-89d4-8124e0966300-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.335391 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4d9399d-41b3-40c1-89d4-8124e0966300-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.335434 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d4d9399d-41b3-40c1-89d4-8124e0966300-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.335461 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d4d9399d-41b3-40c1-89d4-8124e0966300-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.335500 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d4d9399d-41b3-40c1-89d4-8124e0966300-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.335550 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2x7h\" (UniqueName: \"kubernetes.io/projected/d4d9399d-41b3-40c1-89d4-8124e0966300-kube-api-access-r2x7h\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.335579 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d4d9399d-41b3-40c1-89d4-8124e0966300-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.335608 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.335638 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d4d9399d-41b3-40c1-89d4-8124e0966300-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 
07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.336687 4947 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.337117 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d4d9399d-41b3-40c1-89d4-8124e0966300-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.337902 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d4d9399d-41b3-40c1-89d4-8124e0966300-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.338094 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d4d9399d-41b3-40c1-89d4-8124e0966300-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.338491 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d4d9399d-41b3-40c1-89d4-8124e0966300-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.339047 4947 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4d9399d-41b3-40c1-89d4-8124e0966300-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.343294 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d4d9399d-41b3-40c1-89d4-8124e0966300-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.343620 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d4d9399d-41b3-40c1-89d4-8124e0966300-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.345201 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d4d9399d-41b3-40c1-89d4-8124e0966300-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.345729 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d4d9399d-41b3-40c1-89d4-8124e0966300-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.360253 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2x7h\" (UniqueName: \"kubernetes.io/projected/d4d9399d-41b3-40c1-89d4-8124e0966300-kube-api-access-r2x7h\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.369772 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d4d9399d-41b3-40c1-89d4-8124e0966300\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.561600 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:16 crc kubenswrapper[4947]: I1129 07:00:16.969712 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"173b8534-1ee1-448a-bdd1-62369c58057b","Type":"ContainerStarted","Data":"3b6c207996ae2926b99ee0b2659cf5c59fddf828f324cc56da7c4d3187a05f56"} Nov 29 07:00:17 crc kubenswrapper[4947]: I1129 07:00:17.058342 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:00:17 crc kubenswrapper[4947]: W1129 07:00:17.134137 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4d9399d_41b3_40c1_89d4_8124e0966300.slice/crio-405bd76e5ff8f3a007b53da528b3b5b996b098c1173dcf7456f9ece524bc4691 WatchSource:0}: Error finding container 405bd76e5ff8f3a007b53da528b3b5b996b098c1173dcf7456f9ece524bc4691: Status 404 returned error can't find the container with id 405bd76e5ff8f3a007b53da528b3b5b996b098c1173dcf7456f9ece524bc4691 Nov 29 07:00:17 crc kubenswrapper[4947]: I1129 07:00:17.203577 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8" path="/var/lib/kubelet/pods/e9b892d6-7b7d-4259-9dbd-6d0c0d8b12a8/volumes" Nov 29 07:00:17 crc kubenswrapper[4947]: I1129 07:00:17.986250 4947 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d4d9399d-41b3-40c1-89d4-8124e0966300","Type":"ContainerStarted","Data":"405bd76e5ff8f3a007b53da528b3b5b996b098c1173dcf7456f9ece524bc4691"} Nov 29 07:00:17 crc kubenswrapper[4947]: I1129 07:00:17.989395 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"173b8534-1ee1-448a-bdd1-62369c58057b","Type":"ContainerStarted","Data":"7c1c38e9120330d778b0061fac8e20e8f5bee266a6a49ed89c1e193fb511e276"} Nov 29 07:00:18 crc kubenswrapper[4947]: I1129 07:00:18.967917 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-trw52"] Nov 29 07:00:18 crc kubenswrapper[4947]: I1129 07:00:18.970193 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:18 crc kubenswrapper[4947]: I1129 07:00:18.989008 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 29 07:00:18 crc kubenswrapper[4947]: I1129 07:00:18.989271 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-trw52"] Nov 29 07:00:19 crc kubenswrapper[4947]: I1129 07:00:19.003100 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d4d9399d-41b3-40c1-89d4-8124e0966300","Type":"ContainerStarted","Data":"05facf2f4f72a86e3c22461a3abc89b10f558de1363b0f33d5cbddf1bd76b4ef"} Nov 29 07:00:19 crc kubenswrapper[4947]: I1129 07:00:19.099480 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-trw52\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:19 crc kubenswrapper[4947]: I1129 07:00:19.099621 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-trw52\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:19 crc kubenswrapper[4947]: I1129 07:00:19.099783 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-trw52\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:19 crc kubenswrapper[4947]: I1129 07:00:19.099978 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-config\") pod \"dnsmasq-dns-6447ccbd8f-trw52\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:19 crc kubenswrapper[4947]: I1129 07:00:19.100190 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp4gl\" (UniqueName: \"kubernetes.io/projected/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-kube-api-access-cp4gl\") pod \"dnsmasq-dns-6447ccbd8f-trw52\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:19 crc kubenswrapper[4947]: I1129 07:00:19.100303 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-trw52\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:19 crc kubenswrapper[4947]: I1129 
07:00:19.202247 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp4gl\" (UniqueName: \"kubernetes.io/projected/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-kube-api-access-cp4gl\") pod \"dnsmasq-dns-6447ccbd8f-trw52\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:19 crc kubenswrapper[4947]: I1129 07:00:19.202325 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-trw52\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:19 crc kubenswrapper[4947]: I1129 07:00:19.202371 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-trw52\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:19 crc kubenswrapper[4947]: I1129 07:00:19.202407 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-trw52\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:19 crc kubenswrapper[4947]: I1129 07:00:19.202470 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-trw52\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:19 crc kubenswrapper[4947]: I1129 07:00:19.202514 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-config\") pod \"dnsmasq-dns-6447ccbd8f-trw52\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:19 crc kubenswrapper[4947]: I1129 07:00:19.203739 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-config\") pod \"dnsmasq-dns-6447ccbd8f-trw52\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:19 crc kubenswrapper[4947]: I1129 07:00:19.203739 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-trw52\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:19 crc kubenswrapper[4947]: I1129 07:00:19.204150 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-trw52\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:19 crc kubenswrapper[4947]: I1129 07:00:19.204423 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-trw52\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:19 crc kubenswrapper[4947]: I1129 07:00:19.204515 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-trw52\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:19 crc kubenswrapper[4947]: I1129 07:00:19.241722 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp4gl\" (UniqueName: \"kubernetes.io/projected/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-kube-api-access-cp4gl\") pod \"dnsmasq-dns-6447ccbd8f-trw52\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:19 crc kubenswrapper[4947]: I1129 07:00:19.295243 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:19 crc kubenswrapper[4947]: W1129 07:00:19.784037 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f384cf4_0f9f_4059_bc4e_15bb7fce1604.slice/crio-443b8ec68259e14a13ba4fda95707d4b569089280482e4123e6b361100d4f22b WatchSource:0}: Error finding container 443b8ec68259e14a13ba4fda95707d4b569089280482e4123e6b361100d4f22b: Status 404 returned error can't find the container with id 443b8ec68259e14a13ba4fda95707d4b569089280482e4123e6b361100d4f22b Nov 29 07:00:19 crc kubenswrapper[4947]: I1129 07:00:19.785375 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-trw52"] Nov 29 07:00:20 crc kubenswrapper[4947]: I1129 07:00:20.022050 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" event={"ID":"5f384cf4-0f9f-4059-bc4e-15bb7fce1604","Type":"ContainerStarted","Data":"443b8ec68259e14a13ba4fda95707d4b569089280482e4123e6b361100d4f22b"} Nov 29 07:00:21 crc kubenswrapper[4947]: I1129 07:00:21.038409 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f384cf4-0f9f-4059-bc4e-15bb7fce1604" 
containerID="17fa0a3013f9152dd052f67b14a8964192d143e1f2402668b80e9b9b58dca807" exitCode=0 Nov 29 07:00:21 crc kubenswrapper[4947]: I1129 07:00:21.038479 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" event={"ID":"5f384cf4-0f9f-4059-bc4e-15bb7fce1604","Type":"ContainerDied","Data":"17fa0a3013f9152dd052f67b14a8964192d143e1f2402668b80e9b9b58dca807"} Nov 29 07:00:22 crc kubenswrapper[4947]: I1129 07:00:22.065146 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" event={"ID":"5f384cf4-0f9f-4059-bc4e-15bb7fce1604","Type":"ContainerStarted","Data":"3419efa9340f6bd61e22ddf54d4cde6bbed6d901d0e5018d1d60846539e62f6c"} Nov 29 07:00:22 crc kubenswrapper[4947]: I1129 07:00:22.066089 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:22 crc kubenswrapper[4947]: I1129 07:00:22.100208 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" podStartSLOduration=4.100187107 podStartE2EDuration="4.100187107s" podCreationTimestamp="2025-11-29 07:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:00:22.088241807 +0000 UTC m=+1573.132623888" watchObservedRunningTime="2025-11-29 07:00:22.100187107 +0000 UTC m=+1573.144569188" Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.298845 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.370574 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-8n7gk"] Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.370911 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" 
podUID="8d82e0fd-ef7f-47cd-b7f7-095c424197dd" containerName="dnsmasq-dns" containerID="cri-o://206c4cf53a60f433f8e48ff65f16043af4a48924eed77f760bda76029c869633" gracePeriod=10 Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.641758 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-7dxrl"] Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.644085 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.654385 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-7dxrl"] Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.765260 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b76k\" (UniqueName: \"kubernetes.io/projected/a69f6907-c4a4-45a1-a873-ae5c0557ee41-kube-api-access-4b76k\") pod \"dnsmasq-dns-864d5fc68c-7dxrl\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.765446 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-config\") pod \"dnsmasq-dns-864d5fc68c-7dxrl\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.765618 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-7dxrl\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.765823 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-7dxrl\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.765943 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-7dxrl\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.765977 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-7dxrl\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.868845 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-7dxrl\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.869385 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-7dxrl\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.869406 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-7dxrl\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.869534 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b76k\" (UniqueName: \"kubernetes.io/projected/a69f6907-c4a4-45a1-a873-ae5c0557ee41-kube-api-access-4b76k\") pod \"dnsmasq-dns-864d5fc68c-7dxrl\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.869589 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-config\") pod \"dnsmasq-dns-864d5fc68c-7dxrl\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.869639 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-7dxrl\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.870113 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-7dxrl\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.870259 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-7dxrl\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.870824 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-7dxrl\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.870909 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-7dxrl\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.870941 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-config\") pod \"dnsmasq-dns-864d5fc68c-7dxrl\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.895712 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b76k\" (UniqueName: \"kubernetes.io/projected/a69f6907-c4a4-45a1-a873-ae5c0557ee41-kube-api-access-4b76k\") pod \"dnsmasq-dns-864d5fc68c-7dxrl\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:00:29 crc kubenswrapper[4947]: I1129 07:00:29.989796 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:00:30 crc kubenswrapper[4947]: I1129 07:00:30.130694 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" podUID="8d82e0fd-ef7f-47cd-b7f7-095c424197dd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.182:5353: connect: connection refused" Nov 29 07:00:30 crc kubenswrapper[4947]: I1129 07:00:30.175129 4947 generic.go:334] "Generic (PLEG): container finished" podID="8d82e0fd-ef7f-47cd-b7f7-095c424197dd" containerID="206c4cf53a60f433f8e48ff65f16043af4a48924eed77f760bda76029c869633" exitCode=0 Nov 29 07:00:30 crc kubenswrapper[4947]: I1129 07:00:30.175188 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" event={"ID":"8d82e0fd-ef7f-47cd-b7f7-095c424197dd","Type":"ContainerDied","Data":"206c4cf53a60f433f8e48ff65f16043af4a48924eed77f760bda76029c869633"} Nov 29 07:00:30 crc kubenswrapper[4947]: I1129 07:00:30.526535 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-7dxrl"] Nov 29 07:00:30 crc kubenswrapper[4947]: W1129 07:00:30.540510 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda69f6907_c4a4_45a1_a873_ae5c0557ee41.slice/crio-2deac627820121d03fdcfbb05128bf3755e1caf821ec7c11ee5a404f94479ed5 WatchSource:0}: Error finding container 2deac627820121d03fdcfbb05128bf3755e1caf821ec7c11ee5a404f94479ed5: Status 404 returned error can't find the container with id 2deac627820121d03fdcfbb05128bf3755e1caf821ec7c11ee5a404f94479ed5 Nov 29 07:00:30 crc kubenswrapper[4947]: I1129 07:00:30.750461 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" Nov 29 07:00:30 crc kubenswrapper[4947]: I1129 07:00:30.906343 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvqkm\" (UniqueName: \"kubernetes.io/projected/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-kube-api-access-jvqkm\") pod \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\" (UID: \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\") " Nov 29 07:00:30 crc kubenswrapper[4947]: I1129 07:00:30.906452 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-dns-svc\") pod \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\" (UID: \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\") " Nov 29 07:00:30 crc kubenswrapper[4947]: I1129 07:00:30.906573 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-config\") pod \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\" (UID: \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\") " Nov 29 07:00:30 crc kubenswrapper[4947]: I1129 07:00:30.906678 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-ovsdbserver-nb\") pod \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\" (UID: \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\") " Nov 29 07:00:30 crc kubenswrapper[4947]: I1129 07:00:30.906770 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-ovsdbserver-sb\") pod \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\" (UID: \"8d82e0fd-ef7f-47cd-b7f7-095c424197dd\") " Nov 29 07:00:30 crc kubenswrapper[4947]: I1129 07:00:30.932098 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-kube-api-access-jvqkm" (OuterVolumeSpecName: "kube-api-access-jvqkm") pod "8d82e0fd-ef7f-47cd-b7f7-095c424197dd" (UID: "8d82e0fd-ef7f-47cd-b7f7-095c424197dd"). InnerVolumeSpecName "kube-api-access-jvqkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:00:31 crc kubenswrapper[4947]: I1129 07:00:31.010962 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvqkm\" (UniqueName: \"kubernetes.io/projected/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-kube-api-access-jvqkm\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:31 crc kubenswrapper[4947]: I1129 07:00:31.043267 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-config" (OuterVolumeSpecName: "config") pod "8d82e0fd-ef7f-47cd-b7f7-095c424197dd" (UID: "8d82e0fd-ef7f-47cd-b7f7-095c424197dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:00:31 crc kubenswrapper[4947]: I1129 07:00:31.047711 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8d82e0fd-ef7f-47cd-b7f7-095c424197dd" (UID: "8d82e0fd-ef7f-47cd-b7f7-095c424197dd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:00:31 crc kubenswrapper[4947]: I1129 07:00:31.049162 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8d82e0fd-ef7f-47cd-b7f7-095c424197dd" (UID: "8d82e0fd-ef7f-47cd-b7f7-095c424197dd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:00:31 crc kubenswrapper[4947]: I1129 07:00:31.053069 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8d82e0fd-ef7f-47cd-b7f7-095c424197dd" (UID: "8d82e0fd-ef7f-47cd-b7f7-095c424197dd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:00:31 crc kubenswrapper[4947]: I1129 07:00:31.113438 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:31 crc kubenswrapper[4947]: I1129 07:00:31.113477 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:31 crc kubenswrapper[4947]: I1129 07:00:31.113487 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:31 crc kubenswrapper[4947]: I1129 07:00:31.113497 4947 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d82e0fd-ef7f-47cd-b7f7-095c424197dd-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:31 crc kubenswrapper[4947]: I1129 07:00:31.191714 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" Nov 29 07:00:31 crc kubenswrapper[4947]: I1129 07:00:31.214159 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" event={"ID":"a69f6907-c4a4-45a1-a873-ae5c0557ee41","Type":"ContainerStarted","Data":"2deac627820121d03fdcfbb05128bf3755e1caf821ec7c11ee5a404f94479ed5"} Nov 29 07:00:31 crc kubenswrapper[4947]: I1129 07:00:31.214709 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-8n7gk" event={"ID":"8d82e0fd-ef7f-47cd-b7f7-095c424197dd","Type":"ContainerDied","Data":"694b057e23fc50cef6251a82b22842acd3d4bfb613c1295bca17c17a1190aa31"} Nov 29 07:00:31 crc kubenswrapper[4947]: I1129 07:00:31.214749 4947 scope.go:117] "RemoveContainer" containerID="206c4cf53a60f433f8e48ff65f16043af4a48924eed77f760bda76029c869633" Nov 29 07:00:31 crc kubenswrapper[4947]: I1129 07:00:31.258864 4947 scope.go:117] "RemoveContainer" containerID="97b3127733878640f163af5537cd8db424d7267bc7b1a642e109a3719b211c84" Nov 29 07:00:31 crc kubenswrapper[4947]: I1129 07:00:31.259731 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-8n7gk"] Nov 29 07:00:31 crc kubenswrapper[4947]: I1129 07:00:31.270300 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-8n7gk"] Nov 29 07:00:32 crc kubenswrapper[4947]: I1129 07:00:32.205173 4947 generic.go:334] "Generic (PLEG): container finished" podID="a69f6907-c4a4-45a1-a873-ae5c0557ee41" containerID="885fd1bba3922cf325fcc284f243bf28a09153a64d2fbee9ed7e393bef95b4fa" exitCode=0 Nov 29 07:00:32 crc kubenswrapper[4947]: I1129 07:00:32.205301 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" event={"ID":"a69f6907-c4a4-45a1-a873-ae5c0557ee41","Type":"ContainerDied","Data":"885fd1bba3922cf325fcc284f243bf28a09153a64d2fbee9ed7e393bef95b4fa"} Nov 29 07:00:33 crc kubenswrapper[4947]: I1129 
07:00:33.191494 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d82e0fd-ef7f-47cd-b7f7-095c424197dd" path="/var/lib/kubelet/pods/8d82e0fd-ef7f-47cd-b7f7-095c424197dd/volumes" Nov 29 07:00:33 crc kubenswrapper[4947]: I1129 07:00:33.216471 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" event={"ID":"a69f6907-c4a4-45a1-a873-ae5c0557ee41","Type":"ContainerStarted","Data":"39a1ba9fcc013c234ffb2f014b8cef95647a8fc3a9eee1d1a26044ddf9e7e441"} Nov 29 07:00:33 crc kubenswrapper[4947]: I1129 07:00:33.217474 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:00:33 crc kubenswrapper[4947]: I1129 07:00:33.245357 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" podStartSLOduration=4.245337015 podStartE2EDuration="4.245337015s" podCreationTimestamp="2025-11-29 07:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:00:33.23719894 +0000 UTC m=+1584.281581021" watchObservedRunningTime="2025-11-29 07:00:33.245337015 +0000 UTC m=+1584.289719096" Nov 29 07:00:39 crc kubenswrapper[4947]: I1129 07:00:39.993536 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:00:40 crc kubenswrapper[4947]: I1129 07:00:40.072484 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-trw52"] Nov 29 07:00:40 crc kubenswrapper[4947]: I1129 07:00:40.073065 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" podUID="5f384cf4-0f9f-4059-bc4e-15bb7fce1604" containerName="dnsmasq-dns" containerID="cri-o://3419efa9340f6bd61e22ddf54d4cde6bbed6d901d0e5018d1d60846539e62f6c" gracePeriod=10 Nov 29 07:00:41 crc 
kubenswrapper[4947]: I1129 07:00:41.229874 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.316985 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f384cf4-0f9f-4059-bc4e-15bb7fce1604" containerID="3419efa9340f6bd61e22ddf54d4cde6bbed6d901d0e5018d1d60846539e62f6c" exitCode=0 Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.317055 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" event={"ID":"5f384cf4-0f9f-4059-bc4e-15bb7fce1604","Type":"ContainerDied","Data":"3419efa9340f6bd61e22ddf54d4cde6bbed6d901d0e5018d1d60846539e62f6c"} Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.317080 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.317103 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-trw52" event={"ID":"5f384cf4-0f9f-4059-bc4e-15bb7fce1604","Type":"ContainerDied","Data":"443b8ec68259e14a13ba4fda95707d4b569089280482e4123e6b361100d4f22b"} Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.317128 4947 scope.go:117] "RemoveContainer" containerID="3419efa9340f6bd61e22ddf54d4cde6bbed6d901d0e5018d1d60846539e62f6c" Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.331950 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-ovsdbserver-nb\") pod \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.332028 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-dns-svc\") pod \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.332057 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-openstack-edpm-ipam\") pod \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.332128 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp4gl\" (UniqueName: \"kubernetes.io/projected/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-kube-api-access-cp4gl\") pod \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.332190 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-config\") pod \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.332341 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-ovsdbserver-sb\") pod \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\" (UID: \"5f384cf4-0f9f-4059-bc4e-15bb7fce1604\") " Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.341845 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-kube-api-access-cp4gl" (OuterVolumeSpecName: "kube-api-access-cp4gl") pod "5f384cf4-0f9f-4059-bc4e-15bb7fce1604" (UID: "5f384cf4-0f9f-4059-bc4e-15bb7fce1604"). InnerVolumeSpecName "kube-api-access-cp4gl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.350885 4947 scope.go:117] "RemoveContainer" containerID="17fa0a3013f9152dd052f67b14a8964192d143e1f2402668b80e9b9b58dca807" Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.399139 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5f384cf4-0f9f-4059-bc4e-15bb7fce1604" (UID: "5f384cf4-0f9f-4059-bc4e-15bb7fce1604"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.400053 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f384cf4-0f9f-4059-bc4e-15bb7fce1604" (UID: "5f384cf4-0f9f-4059-bc4e-15bb7fce1604"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.401320 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f384cf4-0f9f-4059-bc4e-15bb7fce1604" (UID: "5f384cf4-0f9f-4059-bc4e-15bb7fce1604"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.411725 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "5f384cf4-0f9f-4059-bc4e-15bb7fce1604" (UID: "5f384cf4-0f9f-4059-bc4e-15bb7fce1604"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.415464 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-config" (OuterVolumeSpecName: "config") pod "5f384cf4-0f9f-4059-bc4e-15bb7fce1604" (UID: "5f384cf4-0f9f-4059-bc4e-15bb7fce1604"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.437598 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.437878 4947 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.437962 4947 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.438119 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp4gl\" (UniqueName: \"kubernetes.io/projected/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-kube-api-access-cp4gl\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.438287 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.438404 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/5f384cf4-0f9f-4059-bc4e-15bb7fce1604-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.515736 4947 scope.go:117] "RemoveContainer" containerID="3419efa9340f6bd61e22ddf54d4cde6bbed6d901d0e5018d1d60846539e62f6c" Nov 29 07:00:41 crc kubenswrapper[4947]: E1129 07:00:41.517259 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3419efa9340f6bd61e22ddf54d4cde6bbed6d901d0e5018d1d60846539e62f6c\": container with ID starting with 3419efa9340f6bd61e22ddf54d4cde6bbed6d901d0e5018d1d60846539e62f6c not found: ID does not exist" containerID="3419efa9340f6bd61e22ddf54d4cde6bbed6d901d0e5018d1d60846539e62f6c" Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.517330 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3419efa9340f6bd61e22ddf54d4cde6bbed6d901d0e5018d1d60846539e62f6c"} err="failed to get container status \"3419efa9340f6bd61e22ddf54d4cde6bbed6d901d0e5018d1d60846539e62f6c\": rpc error: code = NotFound desc = could not find container \"3419efa9340f6bd61e22ddf54d4cde6bbed6d901d0e5018d1d60846539e62f6c\": container with ID starting with 3419efa9340f6bd61e22ddf54d4cde6bbed6d901d0e5018d1d60846539e62f6c not found: ID does not exist" Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.517373 4947 scope.go:117] "RemoveContainer" containerID="17fa0a3013f9152dd052f67b14a8964192d143e1f2402668b80e9b9b58dca807" Nov 29 07:00:41 crc kubenswrapper[4947]: E1129 07:00:41.518038 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17fa0a3013f9152dd052f67b14a8964192d143e1f2402668b80e9b9b58dca807\": container with ID starting with 17fa0a3013f9152dd052f67b14a8964192d143e1f2402668b80e9b9b58dca807 not found: ID does not exist" containerID="17fa0a3013f9152dd052f67b14a8964192d143e1f2402668b80e9b9b58dca807" Nov 
29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.518095 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17fa0a3013f9152dd052f67b14a8964192d143e1f2402668b80e9b9b58dca807"} err="failed to get container status \"17fa0a3013f9152dd052f67b14a8964192d143e1f2402668b80e9b9b58dca807\": rpc error: code = NotFound desc = could not find container \"17fa0a3013f9152dd052f67b14a8964192d143e1f2402668b80e9b9b58dca807\": container with ID starting with 17fa0a3013f9152dd052f67b14a8964192d143e1f2402668b80e9b9b58dca807 not found: ID does not exist" Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.662488 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-trw52"] Nov 29 07:00:41 crc kubenswrapper[4947]: I1129 07:00:41.672346 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-trw52"] Nov 29 07:00:43 crc kubenswrapper[4947]: I1129 07:00:43.191682 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f384cf4-0f9f-4059-bc4e-15bb7fce1604" path="/var/lib/kubelet/pods/5f384cf4-0f9f-4059-bc4e-15bb7fce1604/volumes" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.417726 4947 generic.go:334] "Generic (PLEG): container finished" podID="173b8534-1ee1-448a-bdd1-62369c58057b" containerID="7c1c38e9120330d778b0061fac8e20e8f5bee266a6a49ed89c1e193fb511e276" exitCode=0 Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.417843 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"173b8534-1ee1-448a-bdd1-62369c58057b","Type":"ContainerDied","Data":"7c1c38e9120330d778b0061fac8e20e8f5bee266a6a49ed89c1e193fb511e276"} Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.422814 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2"] Nov 29 07:00:50 crc kubenswrapper[4947]: E1129 07:00:50.423472 4947 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5f384cf4-0f9f-4059-bc4e-15bb7fce1604" containerName="init" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.423511 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f384cf4-0f9f-4059-bc4e-15bb7fce1604" containerName="init" Nov 29 07:00:50 crc kubenswrapper[4947]: E1129 07:00:50.423524 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d82e0fd-ef7f-47cd-b7f7-095c424197dd" containerName="dnsmasq-dns" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.423531 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d82e0fd-ef7f-47cd-b7f7-095c424197dd" containerName="dnsmasq-dns" Nov 29 07:00:50 crc kubenswrapper[4947]: E1129 07:00:50.423542 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f384cf4-0f9f-4059-bc4e-15bb7fce1604" containerName="dnsmasq-dns" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.423549 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f384cf4-0f9f-4059-bc4e-15bb7fce1604" containerName="dnsmasq-dns" Nov 29 07:00:50 crc kubenswrapper[4947]: E1129 07:00:50.423562 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d82e0fd-ef7f-47cd-b7f7-095c424197dd" containerName="init" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.423568 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d82e0fd-ef7f-47cd-b7f7-095c424197dd" containerName="init" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.423746 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d82e0fd-ef7f-47cd-b7f7-095c424197dd" containerName="dnsmasq-dns" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.423770 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f384cf4-0f9f-4059-bc4e-15bb7fce1604" containerName="dnsmasq-dns" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.424550 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.428054 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.428103 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.428208 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.428306 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.439783 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2"] Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.558952 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/624b369c-3174-4b17-86de-0950411e5ddf-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2\" (UID: \"624b369c-3174-4b17-86de-0950411e5ddf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.559326 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/624b369c-3174-4b17-86de-0950411e5ddf-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2\" (UID: \"624b369c-3174-4b17-86de-0950411e5ddf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.559789 4947 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lr99\" (UniqueName: \"kubernetes.io/projected/624b369c-3174-4b17-86de-0950411e5ddf-kube-api-access-6lr99\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2\" (UID: \"624b369c-3174-4b17-86de-0950411e5ddf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.560018 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/624b369c-3174-4b17-86de-0950411e5ddf-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2\" (UID: \"624b369c-3174-4b17-86de-0950411e5ddf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.663105 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lr99\" (UniqueName: \"kubernetes.io/projected/624b369c-3174-4b17-86de-0950411e5ddf-kube-api-access-6lr99\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2\" (UID: \"624b369c-3174-4b17-86de-0950411e5ddf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.663927 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/624b369c-3174-4b17-86de-0950411e5ddf-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2\" (UID: \"624b369c-3174-4b17-86de-0950411e5ddf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.664015 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/624b369c-3174-4b17-86de-0950411e5ddf-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2\" (UID: \"624b369c-3174-4b17-86de-0950411e5ddf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.664136 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/624b369c-3174-4b17-86de-0950411e5ddf-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2\" (UID: \"624b369c-3174-4b17-86de-0950411e5ddf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.672552 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/624b369c-3174-4b17-86de-0950411e5ddf-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2\" (UID: \"624b369c-3174-4b17-86de-0950411e5ddf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.678913 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/624b369c-3174-4b17-86de-0950411e5ddf-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2\" (UID: \"624b369c-3174-4b17-86de-0950411e5ddf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.680996 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/624b369c-3174-4b17-86de-0950411e5ddf-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2\" (UID: \"624b369c-3174-4b17-86de-0950411e5ddf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2" Nov 29 07:00:50 crc 
kubenswrapper[4947]: I1129 07:00:50.685567 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lr99\" (UniqueName: \"kubernetes.io/projected/624b369c-3174-4b17-86de-0950411e5ddf-kube-api-access-6lr99\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2\" (UID: \"624b369c-3174-4b17-86de-0950411e5ddf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2" Nov 29 07:00:50 crc kubenswrapper[4947]: I1129 07:00:50.936303 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2" Nov 29 07:00:51 crc kubenswrapper[4947]: I1129 07:00:51.430350 4947 generic.go:334] "Generic (PLEG): container finished" podID="d4d9399d-41b3-40c1-89d4-8124e0966300" containerID="05facf2f4f72a86e3c22461a3abc89b10f558de1363b0f33d5cbddf1bd76b4ef" exitCode=0 Nov 29 07:00:51 crc kubenswrapper[4947]: I1129 07:00:51.430499 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d4d9399d-41b3-40c1-89d4-8124e0966300","Type":"ContainerDied","Data":"05facf2f4f72a86e3c22461a3abc89b10f558de1363b0f33d5cbddf1bd76b4ef"} Nov 29 07:00:51 crc kubenswrapper[4947]: I1129 07:00:51.434332 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"173b8534-1ee1-448a-bdd1-62369c58057b","Type":"ContainerStarted","Data":"d07010342ef78d8c8e27076e261dc02066d79653ceaeae9f083036e8406ffe0e"} Nov 29 07:00:51 crc kubenswrapper[4947]: I1129 07:00:51.434624 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 29 07:00:51 crc kubenswrapper[4947]: I1129 07:00:51.513052 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.513028048 podStartE2EDuration="37.513028048s" podCreationTimestamp="2025-11-29 07:00:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:00:51.494132643 +0000 UTC m=+1602.538514724" watchObservedRunningTime="2025-11-29 07:00:51.513028048 +0000 UTC m=+1602.557410149" Nov 29 07:00:51 crc kubenswrapper[4947]: I1129 07:00:51.592573 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2"] Nov 29 07:00:53 crc kubenswrapper[4947]: I1129 07:00:53.005633 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2" event={"ID":"624b369c-3174-4b17-86de-0950411e5ddf","Type":"ContainerStarted","Data":"c3491943ed4b2d67ca13386cb59c97f71242e09b2ea757724fd7a3308a07b47c"} Nov 29 07:00:53 crc kubenswrapper[4947]: I1129 07:00:53.026272 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d4d9399d-41b3-40c1-89d4-8124e0966300","Type":"ContainerStarted","Data":"7ad079de81dfc92a5bbf0d6248c7c3b57e0592a04d042c0526a013ce31be0f69"} Nov 29 07:00:53 crc kubenswrapper[4947]: I1129 07:00:53.027050 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:00:55 crc kubenswrapper[4947]: I1129 07:00:55.517054 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.517020388 podStartE2EDuration="39.517020388s" podCreationTimestamp="2025-11-29 07:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:00:53.073033947 +0000 UTC m=+1604.117416038" watchObservedRunningTime="2025-11-29 07:00:55.517020388 +0000 UTC m=+1606.561402469" Nov 29 07:00:55 crc kubenswrapper[4947]: I1129 07:00:55.538677 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jbtrv"] Nov 29 07:00:55 crc 
kubenswrapper[4947]: I1129 07:00:55.562238 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jbtrv"] Nov 29 07:00:55 crc kubenswrapper[4947]: I1129 07:00:55.562378 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jbtrv" Nov 29 07:00:55 crc kubenswrapper[4947]: I1129 07:00:55.630783 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6831a389-7b7f-43da-a2e7-0ae324b8c4a0-catalog-content\") pod \"community-operators-jbtrv\" (UID: \"6831a389-7b7f-43da-a2e7-0ae324b8c4a0\") " pod="openshift-marketplace/community-operators-jbtrv" Nov 29 07:00:55 crc kubenswrapper[4947]: I1129 07:00:55.631418 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6831a389-7b7f-43da-a2e7-0ae324b8c4a0-utilities\") pod \"community-operators-jbtrv\" (UID: \"6831a389-7b7f-43da-a2e7-0ae324b8c4a0\") " pod="openshift-marketplace/community-operators-jbtrv" Nov 29 07:00:55 crc kubenswrapper[4947]: I1129 07:00:55.631622 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn56m\" (UniqueName: \"kubernetes.io/projected/6831a389-7b7f-43da-a2e7-0ae324b8c4a0-kube-api-access-nn56m\") pod \"community-operators-jbtrv\" (UID: \"6831a389-7b7f-43da-a2e7-0ae324b8c4a0\") " pod="openshift-marketplace/community-operators-jbtrv" Nov 29 07:00:55 crc kubenswrapper[4947]: I1129 07:00:55.734185 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn56m\" (UniqueName: \"kubernetes.io/projected/6831a389-7b7f-43da-a2e7-0ae324b8c4a0-kube-api-access-nn56m\") pod \"community-operators-jbtrv\" (UID: \"6831a389-7b7f-43da-a2e7-0ae324b8c4a0\") " 
pod="openshift-marketplace/community-operators-jbtrv" Nov 29 07:00:55 crc kubenswrapper[4947]: I1129 07:00:55.734499 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6831a389-7b7f-43da-a2e7-0ae324b8c4a0-catalog-content\") pod \"community-operators-jbtrv\" (UID: \"6831a389-7b7f-43da-a2e7-0ae324b8c4a0\") " pod="openshift-marketplace/community-operators-jbtrv" Nov 29 07:00:55 crc kubenswrapper[4947]: I1129 07:00:55.734687 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6831a389-7b7f-43da-a2e7-0ae324b8c4a0-utilities\") pod \"community-operators-jbtrv\" (UID: \"6831a389-7b7f-43da-a2e7-0ae324b8c4a0\") " pod="openshift-marketplace/community-operators-jbtrv" Nov 29 07:00:55 crc kubenswrapper[4947]: I1129 07:00:55.736280 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6831a389-7b7f-43da-a2e7-0ae324b8c4a0-catalog-content\") pod \"community-operators-jbtrv\" (UID: \"6831a389-7b7f-43da-a2e7-0ae324b8c4a0\") " pod="openshift-marketplace/community-operators-jbtrv" Nov 29 07:00:55 crc kubenswrapper[4947]: I1129 07:00:55.736641 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6831a389-7b7f-43da-a2e7-0ae324b8c4a0-utilities\") pod \"community-operators-jbtrv\" (UID: \"6831a389-7b7f-43da-a2e7-0ae324b8c4a0\") " pod="openshift-marketplace/community-operators-jbtrv" Nov 29 07:00:55 crc kubenswrapper[4947]: I1129 07:00:55.763536 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn56m\" (UniqueName: \"kubernetes.io/projected/6831a389-7b7f-43da-a2e7-0ae324b8c4a0-kube-api-access-nn56m\") pod \"community-operators-jbtrv\" (UID: \"6831a389-7b7f-43da-a2e7-0ae324b8c4a0\") " 
pod="openshift-marketplace/community-operators-jbtrv" Nov 29 07:00:55 crc kubenswrapper[4947]: I1129 07:00:55.906040 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jbtrv" Nov 29 07:00:56 crc kubenswrapper[4947]: W1129 07:00:56.444461 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6831a389_7b7f_43da_a2e7_0ae324b8c4a0.slice/crio-8b8cc217e82a7bf0c9fedee7e930074d8c2329ff6f9ad2e526ac5a0b43efe60d WatchSource:0}: Error finding container 8b8cc217e82a7bf0c9fedee7e930074d8c2329ff6f9ad2e526ac5a0b43efe60d: Status 404 returned error can't find the container with id 8b8cc217e82a7bf0c9fedee7e930074d8c2329ff6f9ad2e526ac5a0b43efe60d Nov 29 07:00:56 crc kubenswrapper[4947]: I1129 07:00:56.455936 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jbtrv"] Nov 29 07:00:57 crc kubenswrapper[4947]: I1129 07:00:57.100977 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbtrv" event={"ID":"6831a389-7b7f-43da-a2e7-0ae324b8c4a0","Type":"ContainerStarted","Data":"8b8cc217e82a7bf0c9fedee7e930074d8c2329ff6f9ad2e526ac5a0b43efe60d"} Nov 29 07:01:00 crc kubenswrapper[4947]: I1129 07:01:00.139446 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29406661-f885s"] Nov 29 07:01:00 crc kubenswrapper[4947]: I1129 07:01:00.141761 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29406661-f885s" Nov 29 07:01:00 crc kubenswrapper[4947]: I1129 07:01:00.165802 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29406661-f885s"] Nov 29 07:01:00 crc kubenswrapper[4947]: I1129 07:01:00.243294 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q7t7\" (UniqueName: \"kubernetes.io/projected/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-kube-api-access-2q7t7\") pod \"keystone-cron-29406661-f885s\" (UID: \"bb316ee8-4e92-455d-b09c-e6f1b57d9a98\") " pod="openstack/keystone-cron-29406661-f885s" Nov 29 07:01:00 crc kubenswrapper[4947]: I1129 07:01:00.243430 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-config-data\") pod \"keystone-cron-29406661-f885s\" (UID: \"bb316ee8-4e92-455d-b09c-e6f1b57d9a98\") " pod="openstack/keystone-cron-29406661-f885s" Nov 29 07:01:00 crc kubenswrapper[4947]: I1129 07:01:00.243482 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-combined-ca-bundle\") pod \"keystone-cron-29406661-f885s\" (UID: \"bb316ee8-4e92-455d-b09c-e6f1b57d9a98\") " pod="openstack/keystone-cron-29406661-f885s" Nov 29 07:01:00 crc kubenswrapper[4947]: I1129 07:01:00.243675 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-fernet-keys\") pod \"keystone-cron-29406661-f885s\" (UID: \"bb316ee8-4e92-455d-b09c-e6f1b57d9a98\") " pod="openstack/keystone-cron-29406661-f885s" Nov 29 07:01:00 crc kubenswrapper[4947]: I1129 07:01:00.346036 4947 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-fernet-keys\") pod \"keystone-cron-29406661-f885s\" (UID: \"bb316ee8-4e92-455d-b09c-e6f1b57d9a98\") " pod="openstack/keystone-cron-29406661-f885s" Nov 29 07:01:00 crc kubenswrapper[4947]: I1129 07:01:00.346207 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q7t7\" (UniqueName: \"kubernetes.io/projected/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-kube-api-access-2q7t7\") pod \"keystone-cron-29406661-f885s\" (UID: \"bb316ee8-4e92-455d-b09c-e6f1b57d9a98\") " pod="openstack/keystone-cron-29406661-f885s" Nov 29 07:01:00 crc kubenswrapper[4947]: I1129 07:01:00.346303 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-config-data\") pod \"keystone-cron-29406661-f885s\" (UID: \"bb316ee8-4e92-455d-b09c-e6f1b57d9a98\") " pod="openstack/keystone-cron-29406661-f885s" Nov 29 07:01:00 crc kubenswrapper[4947]: I1129 07:01:00.346366 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-combined-ca-bundle\") pod \"keystone-cron-29406661-f885s\" (UID: \"bb316ee8-4e92-455d-b09c-e6f1b57d9a98\") " pod="openstack/keystone-cron-29406661-f885s" Nov 29 07:01:00 crc kubenswrapper[4947]: I1129 07:01:00.354748 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-combined-ca-bundle\") pod \"keystone-cron-29406661-f885s\" (UID: \"bb316ee8-4e92-455d-b09c-e6f1b57d9a98\") " pod="openstack/keystone-cron-29406661-f885s" Nov 29 07:01:00 crc kubenswrapper[4947]: I1129 07:01:00.355798 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-config-data\") pod \"keystone-cron-29406661-f885s\" (UID: \"bb316ee8-4e92-455d-b09c-e6f1b57d9a98\") " pod="openstack/keystone-cron-29406661-f885s" Nov 29 07:01:00 crc kubenswrapper[4947]: I1129 07:01:00.365402 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-fernet-keys\") pod \"keystone-cron-29406661-f885s\" (UID: \"bb316ee8-4e92-455d-b09c-e6f1b57d9a98\") " pod="openstack/keystone-cron-29406661-f885s" Nov 29 07:01:00 crc kubenswrapper[4947]: I1129 07:01:00.368926 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q7t7\" (UniqueName: \"kubernetes.io/projected/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-kube-api-access-2q7t7\") pod \"keystone-cron-29406661-f885s\" (UID: \"bb316ee8-4e92-455d-b09c-e6f1b57d9a98\") " pod="openstack/keystone-cron-29406661-f885s" Nov 29 07:01:00 crc kubenswrapper[4947]: I1129 07:01:00.496234 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29406661-f885s" Nov 29 07:01:03 crc kubenswrapper[4947]: I1129 07:01:03.809844 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29406661-f885s"] Nov 29 07:01:04 crc kubenswrapper[4947]: I1129 07:01:04.244718 4947 generic.go:334] "Generic (PLEG): container finished" podID="6831a389-7b7f-43da-a2e7-0ae324b8c4a0" containerID="f0df1a2b183e7de819551c41e58a3609089015bf52f351dd20d15a7f61d40da6" exitCode=0 Nov 29 07:01:04 crc kubenswrapper[4947]: I1129 07:01:04.244795 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbtrv" event={"ID":"6831a389-7b7f-43da-a2e7-0ae324b8c4a0","Type":"ContainerDied","Data":"f0df1a2b183e7de819551c41e58a3609089015bf52f351dd20d15a7f61d40da6"} Nov 29 07:01:05 crc kubenswrapper[4947]: I1129 07:01:05.454140 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="173b8534-1ee1-448a-bdd1-62369c58057b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.190:5671: connect: connection refused" Nov 29 07:01:06 crc kubenswrapper[4947]: W1129 07:01:06.118799 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb316ee8_4e92_455d_b09c_e6f1b57d9a98.slice/crio-6803a428383cd286370529721a2e66ec205dd784bf8a2e4053f03448790d2e99 WatchSource:0}: Error finding container 6803a428383cd286370529721a2e66ec205dd784bf8a2e4053f03448790d2e99: Status 404 returned error can't find the container with id 6803a428383cd286370529721a2e66ec205dd784bf8a2e4053f03448790d2e99 Nov 29 07:01:06 crc kubenswrapper[4947]: I1129 07:01:06.287856 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406661-f885s" event={"ID":"bb316ee8-4e92-455d-b09c-e6f1b57d9a98","Type":"ContainerStarted","Data":"6803a428383cd286370529721a2e66ec205dd784bf8a2e4053f03448790d2e99"} Nov 29 07:01:06 crc 
kubenswrapper[4947]: I1129 07:01:06.569800 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d4d9399d-41b3-40c1-89d4-8124e0966300" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.191:5671: connect: connection refused" Nov 29 07:01:07 crc kubenswrapper[4947]: I1129 07:01:07.303720 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406661-f885s" event={"ID":"bb316ee8-4e92-455d-b09c-e6f1b57d9a98","Type":"ContainerStarted","Data":"a995f7533f368b445307be2b6c641562d53607846e6956fa28b7ea2f5bdd5cb3"} Nov 29 07:01:07 crc kubenswrapper[4947]: I1129 07:01:07.307496 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbtrv" event={"ID":"6831a389-7b7f-43da-a2e7-0ae324b8c4a0","Type":"ContainerStarted","Data":"7918cc0c027dcf8f6d57523a30e184f1b54253a9a16f40e0df11153dd469008f"} Nov 29 07:01:07 crc kubenswrapper[4947]: I1129 07:01:07.311867 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2" event={"ID":"624b369c-3174-4b17-86de-0950411e5ddf","Type":"ContainerStarted","Data":"32fd31c4b506abe6f0a2a68b5a465bb2acf785e68de42627a5ecebb5253bd0b5"} Nov 29 07:01:07 crc kubenswrapper[4947]: I1129 07:01:07.330695 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29406661-f885s" podStartSLOduration=7.330669492 podStartE2EDuration="7.330669492s" podCreationTimestamp="2025-11-29 07:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:01:07.328721623 +0000 UTC m=+1618.373103704" watchObservedRunningTime="2025-11-29 07:01:07.330669492 +0000 UTC m=+1618.375051583" Nov 29 07:01:07 crc kubenswrapper[4947]: I1129 07:01:07.401818 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2" podStartSLOduration=2.820537989 podStartE2EDuration="17.40179246s" podCreationTimestamp="2025-11-29 07:00:50 +0000 UTC" firstStartedPulling="2025-11-29 07:00:51.593293966 +0000 UTC m=+1602.637676047" lastFinishedPulling="2025-11-29 07:01:06.174548437 +0000 UTC m=+1617.218930518" observedRunningTime="2025-11-29 07:01:07.387758207 +0000 UTC m=+1618.432140298" watchObservedRunningTime="2025-11-29 07:01:07.40179246 +0000 UTC m=+1618.446174541" Nov 29 07:01:08 crc kubenswrapper[4947]: I1129 07:01:08.324721 4947 generic.go:334] "Generic (PLEG): container finished" podID="6831a389-7b7f-43da-a2e7-0ae324b8c4a0" containerID="7918cc0c027dcf8f6d57523a30e184f1b54253a9a16f40e0df11153dd469008f" exitCode=0 Nov 29 07:01:08 crc kubenswrapper[4947]: I1129 07:01:08.324791 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbtrv" event={"ID":"6831a389-7b7f-43da-a2e7-0ae324b8c4a0","Type":"ContainerDied","Data":"7918cc0c027dcf8f6d57523a30e184f1b54253a9a16f40e0df11153dd469008f"} Nov 29 07:01:12 crc kubenswrapper[4947]: I1129 07:01:12.369335 4947 generic.go:334] "Generic (PLEG): container finished" podID="bb316ee8-4e92-455d-b09c-e6f1b57d9a98" containerID="a995f7533f368b445307be2b6c641562d53607846e6956fa28b7ea2f5bdd5cb3" exitCode=0 Nov 29 07:01:12 crc kubenswrapper[4947]: I1129 07:01:12.369405 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406661-f885s" event={"ID":"bb316ee8-4e92-455d-b09c-e6f1b57d9a98","Type":"ContainerDied","Data":"a995f7533f368b445307be2b6c641562d53607846e6956fa28b7ea2f5bdd5cb3"} Nov 29 07:01:13 crc kubenswrapper[4947]: I1129 07:01:13.386981 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbtrv" event={"ID":"6831a389-7b7f-43da-a2e7-0ae324b8c4a0","Type":"ContainerStarted","Data":"3eceb01065b7740779d7e82c3033deaf325d52c341a95609ba126e0f6d437562"} Nov 29 07:01:13 
crc kubenswrapper[4947]: I1129 07:01:13.885969 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29406661-f885s" Nov 29 07:01:13 crc kubenswrapper[4947]: I1129 07:01:13.910803 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jbtrv" podStartSLOduration=12.550350907 podStartE2EDuration="18.910779668s" podCreationTimestamp="2025-11-29 07:00:55 +0000 UTC" firstStartedPulling="2025-11-29 07:01:06.115675747 +0000 UTC m=+1617.160057828" lastFinishedPulling="2025-11-29 07:01:12.476104508 +0000 UTC m=+1623.520486589" observedRunningTime="2025-11-29 07:01:13.424647877 +0000 UTC m=+1624.469029978" watchObservedRunningTime="2025-11-29 07:01:13.910779668 +0000 UTC m=+1624.955161749" Nov 29 07:01:13 crc kubenswrapper[4947]: I1129 07:01:13.984121 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-fernet-keys\") pod \"bb316ee8-4e92-455d-b09c-e6f1b57d9a98\" (UID: \"bb316ee8-4e92-455d-b09c-e6f1b57d9a98\") " Nov 29 07:01:13 crc kubenswrapper[4947]: I1129 07:01:13.984205 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-combined-ca-bundle\") pod \"bb316ee8-4e92-455d-b09c-e6f1b57d9a98\" (UID: \"bb316ee8-4e92-455d-b09c-e6f1b57d9a98\") " Nov 29 07:01:13 crc kubenswrapper[4947]: I1129 07:01:13.984291 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-config-data\") pod \"bb316ee8-4e92-455d-b09c-e6f1b57d9a98\" (UID: \"bb316ee8-4e92-455d-b09c-e6f1b57d9a98\") " Nov 29 07:01:13 crc kubenswrapper[4947]: I1129 07:01:13.984522 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-2q7t7\" (UniqueName: \"kubernetes.io/projected/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-kube-api-access-2q7t7\") pod \"bb316ee8-4e92-455d-b09c-e6f1b57d9a98\" (UID: \"bb316ee8-4e92-455d-b09c-e6f1b57d9a98\") " Nov 29 07:01:13 crc kubenswrapper[4947]: I1129 07:01:13.993654 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bb316ee8-4e92-455d-b09c-e6f1b57d9a98" (UID: "bb316ee8-4e92-455d-b09c-e6f1b57d9a98"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:13 crc kubenswrapper[4947]: I1129 07:01:13.994757 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-kube-api-access-2q7t7" (OuterVolumeSpecName: "kube-api-access-2q7t7") pod "bb316ee8-4e92-455d-b09c-e6f1b57d9a98" (UID: "bb316ee8-4e92-455d-b09c-e6f1b57d9a98"). InnerVolumeSpecName "kube-api-access-2q7t7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:14 crc kubenswrapper[4947]: I1129 07:01:14.017701 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb316ee8-4e92-455d-b09c-e6f1b57d9a98" (UID: "bb316ee8-4e92-455d-b09c-e6f1b57d9a98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:14 crc kubenswrapper[4947]: I1129 07:01:14.057477 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-config-data" (OuterVolumeSpecName: "config-data") pod "bb316ee8-4e92-455d-b09c-e6f1b57d9a98" (UID: "bb316ee8-4e92-455d-b09c-e6f1b57d9a98"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:14 crc kubenswrapper[4947]: I1129 07:01:14.087197 4947 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:14 crc kubenswrapper[4947]: I1129 07:01:14.087263 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:14 crc kubenswrapper[4947]: I1129 07:01:14.087282 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:14 crc kubenswrapper[4947]: I1129 07:01:14.087297 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q7t7\" (UniqueName: \"kubernetes.io/projected/bb316ee8-4e92-455d-b09c-e6f1b57d9a98-kube-api-access-2q7t7\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:14 crc kubenswrapper[4947]: I1129 07:01:14.400320 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29406661-f885s" Nov 29 07:01:14 crc kubenswrapper[4947]: I1129 07:01:14.400321 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406661-f885s" event={"ID":"bb316ee8-4e92-455d-b09c-e6f1b57d9a98","Type":"ContainerDied","Data":"6803a428383cd286370529721a2e66ec205dd784bf8a2e4053f03448790d2e99"} Nov 29 07:01:14 crc kubenswrapper[4947]: I1129 07:01:14.402072 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6803a428383cd286370529721a2e66ec205dd784bf8a2e4053f03448790d2e99" Nov 29 07:01:15 crc kubenswrapper[4947]: I1129 07:01:15.453413 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 29 07:01:15 crc kubenswrapper[4947]: I1129 07:01:15.906625 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jbtrv" Nov 29 07:01:15 crc kubenswrapper[4947]: I1129 07:01:15.907026 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jbtrv" Nov 29 07:01:15 crc kubenswrapper[4947]: I1129 07:01:15.968538 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jbtrv" Nov 29 07:01:16 crc kubenswrapper[4947]: I1129 07:01:16.564682 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:01:22 crc kubenswrapper[4947]: I1129 07:01:22.495822 4947 generic.go:334] "Generic (PLEG): container finished" podID="624b369c-3174-4b17-86de-0950411e5ddf" containerID="32fd31c4b506abe6f0a2a68b5a465bb2acf785e68de42627a5ecebb5253bd0b5" exitCode=0 Nov 29 07:01:22 crc kubenswrapper[4947]: I1129 07:01:22.495927 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2" 
event={"ID":"624b369c-3174-4b17-86de-0950411e5ddf","Type":"ContainerDied","Data":"32fd31c4b506abe6f0a2a68b5a465bb2acf785e68de42627a5ecebb5253bd0b5"} Nov 29 07:01:22 crc kubenswrapper[4947]: I1129 07:01:22.988653 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:01:22 crc kubenswrapper[4947]: I1129 07:01:22.988763 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.125460 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.231912 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lr99\" (UniqueName: \"kubernetes.io/projected/624b369c-3174-4b17-86de-0950411e5ddf-kube-api-access-6lr99\") pod \"624b369c-3174-4b17-86de-0950411e5ddf\" (UID: \"624b369c-3174-4b17-86de-0950411e5ddf\") " Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.232066 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/624b369c-3174-4b17-86de-0950411e5ddf-inventory\") pod \"624b369c-3174-4b17-86de-0950411e5ddf\" (UID: \"624b369c-3174-4b17-86de-0950411e5ddf\") " Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.232122 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/624b369c-3174-4b17-86de-0950411e5ddf-ssh-key\") pod \"624b369c-3174-4b17-86de-0950411e5ddf\" (UID: \"624b369c-3174-4b17-86de-0950411e5ddf\") " Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.232180 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/624b369c-3174-4b17-86de-0950411e5ddf-repo-setup-combined-ca-bundle\") pod \"624b369c-3174-4b17-86de-0950411e5ddf\" (UID: \"624b369c-3174-4b17-86de-0950411e5ddf\") " Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.239771 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/624b369c-3174-4b17-86de-0950411e5ddf-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "624b369c-3174-4b17-86de-0950411e5ddf" (UID: "624b369c-3174-4b17-86de-0950411e5ddf"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.239924 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/624b369c-3174-4b17-86de-0950411e5ddf-kube-api-access-6lr99" (OuterVolumeSpecName: "kube-api-access-6lr99") pod "624b369c-3174-4b17-86de-0950411e5ddf" (UID: "624b369c-3174-4b17-86de-0950411e5ddf"). InnerVolumeSpecName "kube-api-access-6lr99". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.266519 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/624b369c-3174-4b17-86de-0950411e5ddf-inventory" (OuterVolumeSpecName: "inventory") pod "624b369c-3174-4b17-86de-0950411e5ddf" (UID: "624b369c-3174-4b17-86de-0950411e5ddf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.271303 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/624b369c-3174-4b17-86de-0950411e5ddf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "624b369c-3174-4b17-86de-0950411e5ddf" (UID: "624b369c-3174-4b17-86de-0950411e5ddf"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.335339 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/624b369c-3174-4b17-86de-0950411e5ddf-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.335429 4947 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/624b369c-3174-4b17-86de-0950411e5ddf-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.335446 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lr99\" (UniqueName: \"kubernetes.io/projected/624b369c-3174-4b17-86de-0950411e5ddf-kube-api-access-6lr99\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.335459 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/624b369c-3174-4b17-86de-0950411e5ddf-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.526170 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2" event={"ID":"624b369c-3174-4b17-86de-0950411e5ddf","Type":"ContainerDied","Data":"c3491943ed4b2d67ca13386cb59c97f71242e09b2ea757724fd7a3308a07b47c"} Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.526680 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3491943ed4b2d67ca13386cb59c97f71242e09b2ea757724fd7a3308a07b47c" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.526306 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.707514 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs"] Nov 29 07:01:24 crc kubenswrapper[4947]: E1129 07:01:24.708283 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb316ee8-4e92-455d-b09c-e6f1b57d9a98" containerName="keystone-cron" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.708305 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb316ee8-4e92-455d-b09c-e6f1b57d9a98" containerName="keystone-cron" Nov 29 07:01:24 crc kubenswrapper[4947]: E1129 07:01:24.708349 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="624b369c-3174-4b17-86de-0950411e5ddf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.708363 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="624b369c-3174-4b17-86de-0950411e5ddf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.708611 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="624b369c-3174-4b17-86de-0950411e5ddf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.708643 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb316ee8-4e92-455d-b09c-e6f1b57d9a98" containerName="keystone-cron" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.709650 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.713954 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.714517 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.723019 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs"] Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.727159 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.728295 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.746028 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2c0f3f4c-3de4-4c42-b53f-57845082dd20-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs\" (UID: \"2c0f3f4c-3de4-4c42-b53f-57845082dd20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.746156 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c0f3f4c-3de4-4c42-b53f-57845082dd20-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs\" (UID: \"2c0f3f4c-3de4-4c42-b53f-57845082dd20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.746201 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0f3f4c-3de4-4c42-b53f-57845082dd20-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs\" (UID: \"2c0f3f4c-3de4-4c42-b53f-57845082dd20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.746345 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vqw9\" (UniqueName: \"kubernetes.io/projected/2c0f3f4c-3de4-4c42-b53f-57845082dd20-kube-api-access-6vqw9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs\" (UID: \"2c0f3f4c-3de4-4c42-b53f-57845082dd20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.850379 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2c0f3f4c-3de4-4c42-b53f-57845082dd20-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs\" (UID: \"2c0f3f4c-3de4-4c42-b53f-57845082dd20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.850520 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c0f3f4c-3de4-4c42-b53f-57845082dd20-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs\" (UID: \"2c0f3f4c-3de4-4c42-b53f-57845082dd20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.850584 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0f3f4c-3de4-4c42-b53f-57845082dd20-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs\" (UID: \"2c0f3f4c-3de4-4c42-b53f-57845082dd20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.850639 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vqw9\" (UniqueName: \"kubernetes.io/projected/2c0f3f4c-3de4-4c42-b53f-57845082dd20-kube-api-access-6vqw9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs\" (UID: \"2c0f3f4c-3de4-4c42-b53f-57845082dd20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.857562 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2c0f3f4c-3de4-4c42-b53f-57845082dd20-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs\" (UID: \"2c0f3f4c-3de4-4c42-b53f-57845082dd20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.857831 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0f3f4c-3de4-4c42-b53f-57845082dd20-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs\" (UID: \"2c0f3f4c-3de4-4c42-b53f-57845082dd20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.858559 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c0f3f4c-3de4-4c42-b53f-57845082dd20-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs\" (UID: \"2c0f3f4c-3de4-4c42-b53f-57845082dd20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs" Nov 29 07:01:24 crc kubenswrapper[4947]: I1129 07:01:24.878969 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6vqw9\" (UniqueName: \"kubernetes.io/projected/2c0f3f4c-3de4-4c42-b53f-57845082dd20-kube-api-access-6vqw9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs\" (UID: \"2c0f3f4c-3de4-4c42-b53f-57845082dd20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs" Nov 29 07:01:25 crc kubenswrapper[4947]: I1129 07:01:25.038757 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs" Nov 29 07:01:25 crc kubenswrapper[4947]: I1129 07:01:25.701770 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs"] Nov 29 07:01:25 crc kubenswrapper[4947]: W1129 07:01:25.712922 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c0f3f4c_3de4_4c42_b53f_57845082dd20.slice/crio-a3b76e49a6be7d87ca99d685804bff18cbfddcd3e90c3246bcb849df34ab1e53 WatchSource:0}: Error finding container a3b76e49a6be7d87ca99d685804bff18cbfddcd3e90c3246bcb849df34ab1e53: Status 404 returned error can't find the container with id a3b76e49a6be7d87ca99d685804bff18cbfddcd3e90c3246bcb849df34ab1e53 Nov 29 07:01:25 crc kubenswrapper[4947]: I1129 07:01:25.990530 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jbtrv" Nov 29 07:01:26 crc kubenswrapper[4947]: I1129 07:01:26.075971 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jbtrv"] Nov 29 07:01:26 crc kubenswrapper[4947]: I1129 07:01:26.552415 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs" event={"ID":"2c0f3f4c-3de4-4c42-b53f-57845082dd20","Type":"ContainerStarted","Data":"a3b76e49a6be7d87ca99d685804bff18cbfddcd3e90c3246bcb849df34ab1e53"} Nov 29 07:01:26 
crc kubenswrapper[4947]: I1129 07:01:26.553299 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jbtrv" podUID="6831a389-7b7f-43da-a2e7-0ae324b8c4a0" containerName="registry-server" containerID="cri-o://3eceb01065b7740779d7e82c3033deaf325d52c341a95609ba126e0f6d437562" gracePeriod=2 Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.130468 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jbtrv" Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.246474 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6831a389-7b7f-43da-a2e7-0ae324b8c4a0-catalog-content\") pod \"6831a389-7b7f-43da-a2e7-0ae324b8c4a0\" (UID: \"6831a389-7b7f-43da-a2e7-0ae324b8c4a0\") " Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.246636 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6831a389-7b7f-43da-a2e7-0ae324b8c4a0-utilities\") pod \"6831a389-7b7f-43da-a2e7-0ae324b8c4a0\" (UID: \"6831a389-7b7f-43da-a2e7-0ae324b8c4a0\") " Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.246807 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn56m\" (UniqueName: \"kubernetes.io/projected/6831a389-7b7f-43da-a2e7-0ae324b8c4a0-kube-api-access-nn56m\") pod \"6831a389-7b7f-43da-a2e7-0ae324b8c4a0\" (UID: \"6831a389-7b7f-43da-a2e7-0ae324b8c4a0\") " Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.250295 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6831a389-7b7f-43da-a2e7-0ae324b8c4a0-utilities" (OuterVolumeSpecName: "utilities") pod "6831a389-7b7f-43da-a2e7-0ae324b8c4a0" (UID: "6831a389-7b7f-43da-a2e7-0ae324b8c4a0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.256349 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6831a389-7b7f-43da-a2e7-0ae324b8c4a0-kube-api-access-nn56m" (OuterVolumeSpecName: "kube-api-access-nn56m") pod "6831a389-7b7f-43da-a2e7-0ae324b8c4a0" (UID: "6831a389-7b7f-43da-a2e7-0ae324b8c4a0"). InnerVolumeSpecName "kube-api-access-nn56m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.314293 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6831a389-7b7f-43da-a2e7-0ae324b8c4a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6831a389-7b7f-43da-a2e7-0ae324b8c4a0" (UID: "6831a389-7b7f-43da-a2e7-0ae324b8c4a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.350759 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6831a389-7b7f-43da-a2e7-0ae324b8c4a0-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.350818 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn56m\" (UniqueName: \"kubernetes.io/projected/6831a389-7b7f-43da-a2e7-0ae324b8c4a0-kube-api-access-nn56m\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.350830 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6831a389-7b7f-43da-a2e7-0ae324b8c4a0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.569832 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs" 
event={"ID":"2c0f3f4c-3de4-4c42-b53f-57845082dd20","Type":"ContainerStarted","Data":"c70804da874a89e394a2eb609846fd6eb81bba660410545dfb851a76fb638483"} Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.581491 4947 generic.go:334] "Generic (PLEG): container finished" podID="6831a389-7b7f-43da-a2e7-0ae324b8c4a0" containerID="3eceb01065b7740779d7e82c3033deaf325d52c341a95609ba126e0f6d437562" exitCode=0 Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.581568 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbtrv" event={"ID":"6831a389-7b7f-43da-a2e7-0ae324b8c4a0","Type":"ContainerDied","Data":"3eceb01065b7740779d7e82c3033deaf325d52c341a95609ba126e0f6d437562"} Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.581638 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jbtrv" Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.581655 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbtrv" event={"ID":"6831a389-7b7f-43da-a2e7-0ae324b8c4a0","Type":"ContainerDied","Data":"8b8cc217e82a7bf0c9fedee7e930074d8c2329ff6f9ad2e526ac5a0b43efe60d"} Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.581745 4947 scope.go:117] "RemoveContainer" containerID="3eceb01065b7740779d7e82c3033deaf325d52c341a95609ba126e0f6d437562" Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.607053 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs" podStartSLOduration=2.7735674120000002 podStartE2EDuration="3.607031682s" podCreationTimestamp="2025-11-29 07:01:24 +0000 UTC" firstStartedPulling="2025-11-29 07:01:25.718658061 +0000 UTC m=+1636.763040142" lastFinishedPulling="2025-11-29 07:01:26.552122331 +0000 UTC m=+1637.596504412" observedRunningTime="2025-11-29 07:01:27.595168736 +0000 UTC m=+1638.639550837" 
watchObservedRunningTime="2025-11-29 07:01:27.607031682 +0000 UTC m=+1638.651413763" Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.658824 4947 scope.go:117] "RemoveContainer" containerID="7918cc0c027dcf8f6d57523a30e184f1b54253a9a16f40e0df11153dd469008f" Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.668646 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jbtrv"] Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.679396 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jbtrv"] Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.692201 4947 scope.go:117] "RemoveContainer" containerID="f0df1a2b183e7de819551c41e58a3609089015bf52f351dd20d15a7f61d40da6" Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.760463 4947 scope.go:117] "RemoveContainer" containerID="3eceb01065b7740779d7e82c3033deaf325d52c341a95609ba126e0f6d437562" Nov 29 07:01:27 crc kubenswrapper[4947]: E1129 07:01:27.761158 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eceb01065b7740779d7e82c3033deaf325d52c341a95609ba126e0f6d437562\": container with ID starting with 3eceb01065b7740779d7e82c3033deaf325d52c341a95609ba126e0f6d437562 not found: ID does not exist" containerID="3eceb01065b7740779d7e82c3033deaf325d52c341a95609ba126e0f6d437562" Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.761200 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eceb01065b7740779d7e82c3033deaf325d52c341a95609ba126e0f6d437562"} err="failed to get container status \"3eceb01065b7740779d7e82c3033deaf325d52c341a95609ba126e0f6d437562\": rpc error: code = NotFound desc = could not find container \"3eceb01065b7740779d7e82c3033deaf325d52c341a95609ba126e0f6d437562\": container with ID starting with 3eceb01065b7740779d7e82c3033deaf325d52c341a95609ba126e0f6d437562 not 
found: ID does not exist" Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.761333 4947 scope.go:117] "RemoveContainer" containerID="7918cc0c027dcf8f6d57523a30e184f1b54253a9a16f40e0df11153dd469008f" Nov 29 07:01:27 crc kubenswrapper[4947]: E1129 07:01:27.762319 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7918cc0c027dcf8f6d57523a30e184f1b54253a9a16f40e0df11153dd469008f\": container with ID starting with 7918cc0c027dcf8f6d57523a30e184f1b54253a9a16f40e0df11153dd469008f not found: ID does not exist" containerID="7918cc0c027dcf8f6d57523a30e184f1b54253a9a16f40e0df11153dd469008f" Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.762411 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7918cc0c027dcf8f6d57523a30e184f1b54253a9a16f40e0df11153dd469008f"} err="failed to get container status \"7918cc0c027dcf8f6d57523a30e184f1b54253a9a16f40e0df11153dd469008f\": rpc error: code = NotFound desc = could not find container \"7918cc0c027dcf8f6d57523a30e184f1b54253a9a16f40e0df11153dd469008f\": container with ID starting with 7918cc0c027dcf8f6d57523a30e184f1b54253a9a16f40e0df11153dd469008f not found: ID does not exist" Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.762448 4947 scope.go:117] "RemoveContainer" containerID="f0df1a2b183e7de819551c41e58a3609089015bf52f351dd20d15a7f61d40da6" Nov 29 07:01:27 crc kubenswrapper[4947]: E1129 07:01:27.763076 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0df1a2b183e7de819551c41e58a3609089015bf52f351dd20d15a7f61d40da6\": container with ID starting with f0df1a2b183e7de819551c41e58a3609089015bf52f351dd20d15a7f61d40da6 not found: ID does not exist" containerID="f0df1a2b183e7de819551c41e58a3609089015bf52f351dd20d15a7f61d40da6" Nov 29 07:01:27 crc kubenswrapper[4947]: I1129 07:01:27.763108 4947 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0df1a2b183e7de819551c41e58a3609089015bf52f351dd20d15a7f61d40da6"} err="failed to get container status \"f0df1a2b183e7de819551c41e58a3609089015bf52f351dd20d15a7f61d40da6\": rpc error: code = NotFound desc = could not find container \"f0df1a2b183e7de819551c41e58a3609089015bf52f351dd20d15a7f61d40da6\": container with ID starting with f0df1a2b183e7de819551c41e58a3609089015bf52f351dd20d15a7f61d40da6 not found: ID does not exist" Nov 29 07:01:29 crc kubenswrapper[4947]: I1129 07:01:29.191165 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6831a389-7b7f-43da-a2e7-0ae324b8c4a0" path="/var/lib/kubelet/pods/6831a389-7b7f-43da-a2e7-0ae324b8c4a0/volumes" Nov 29 07:01:35 crc kubenswrapper[4947]: I1129 07:01:35.640103 4947 scope.go:117] "RemoveContainer" containerID="581deea5a04147a566075b9d640e15eff4822d496d12e2c9c238dbd60a9b5d18" Nov 29 07:01:52 crc kubenswrapper[4947]: I1129 07:01:52.987417 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:01:52 crc kubenswrapper[4947]: I1129 07:01:52.988170 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:02:22 crc kubenswrapper[4947]: I1129 07:02:22.988188 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:02:22 crc kubenswrapper[4947]: I1129 07:02:22.988951 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:02:22 crc kubenswrapper[4947]: I1129 07:02:22.989015 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 07:02:22 crc kubenswrapper[4947]: I1129 07:02:22.990017 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3"} pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:02:22 crc kubenswrapper[4947]: I1129 07:02:22.990085 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" containerID="cri-o://4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" gracePeriod=600 Nov 29 07:02:23 crc kubenswrapper[4947]: E1129 07:02:23.692596 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:02:24 crc 
kubenswrapper[4947]: I1129 07:02:24.186068 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" exitCode=0 Nov 29 07:02:24 crc kubenswrapper[4947]: I1129 07:02:24.186131 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerDied","Data":"4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3"} Nov 29 07:02:24 crc kubenswrapper[4947]: I1129 07:02:24.186188 4947 scope.go:117] "RemoveContainer" containerID="df51260596870c91ccb9712810f435518b0c8fd5a5c15540a25aafaee5eb1aa5" Nov 29 07:02:24 crc kubenswrapper[4947]: I1129 07:02:24.186689 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:02:24 crc kubenswrapper[4947]: E1129 07:02:24.186947 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:02:35 crc kubenswrapper[4947]: I1129 07:02:35.178926 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:02:35 crc kubenswrapper[4947]: E1129 07:02:35.179811 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:02:35 crc kubenswrapper[4947]: I1129 07:02:35.771058 4947 scope.go:117] "RemoveContainer" containerID="7bd62e2877550c20eabd05cf2f63e2b561345bf6d8e8494a1e9c3b047ba4ea8a" Nov 29 07:02:35 crc kubenswrapper[4947]: I1129 07:02:35.826814 4947 scope.go:117] "RemoveContainer" containerID="8731b73dcd468f70161bee9662a54eaf91c21f855da0fbd1d56ae2c80052d69f" Nov 29 07:02:35 crc kubenswrapper[4947]: I1129 07:02:35.851655 4947 scope.go:117] "RemoveContainer" containerID="914c2766feded3e7f76c8a625f6b44f5dc881a4aa99796559acc9afb28a89a37" Nov 29 07:02:47 crc kubenswrapper[4947]: I1129 07:02:47.179676 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:02:47 crc kubenswrapper[4947]: E1129 07:02:47.180702 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:03:00 crc kubenswrapper[4947]: I1129 07:03:00.179074 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:03:00 crc kubenswrapper[4947]: E1129 07:03:00.180233 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 
29 07:03:03 crc kubenswrapper[4947]: I1129 07:03:03.709538 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tlldl"] Nov 29 07:03:03 crc kubenswrapper[4947]: E1129 07:03:03.710971 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6831a389-7b7f-43da-a2e7-0ae324b8c4a0" containerName="extract-content" Nov 29 07:03:03 crc kubenswrapper[4947]: I1129 07:03:03.710993 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="6831a389-7b7f-43da-a2e7-0ae324b8c4a0" containerName="extract-content" Nov 29 07:03:03 crc kubenswrapper[4947]: E1129 07:03:03.711036 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6831a389-7b7f-43da-a2e7-0ae324b8c4a0" containerName="extract-utilities" Nov 29 07:03:03 crc kubenswrapper[4947]: I1129 07:03:03.711045 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="6831a389-7b7f-43da-a2e7-0ae324b8c4a0" containerName="extract-utilities" Nov 29 07:03:03 crc kubenswrapper[4947]: E1129 07:03:03.711062 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6831a389-7b7f-43da-a2e7-0ae324b8c4a0" containerName="registry-server" Nov 29 07:03:03 crc kubenswrapper[4947]: I1129 07:03:03.711075 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="6831a389-7b7f-43da-a2e7-0ae324b8c4a0" containerName="registry-server" Nov 29 07:03:03 crc kubenswrapper[4947]: I1129 07:03:03.711340 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="6831a389-7b7f-43da-a2e7-0ae324b8c4a0" containerName="registry-server" Nov 29 07:03:03 crc kubenswrapper[4947]: I1129 07:03:03.713186 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlldl" Nov 29 07:03:03 crc kubenswrapper[4947]: I1129 07:03:03.732854 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlldl"] Nov 29 07:03:03 crc kubenswrapper[4947]: I1129 07:03:03.859057 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bc26e42-9673-458e-9d27-c18b701e50fb-catalog-content\") pod \"redhat-marketplace-tlldl\" (UID: \"6bc26e42-9673-458e-9d27-c18b701e50fb\") " pod="openshift-marketplace/redhat-marketplace-tlldl" Nov 29 07:03:03 crc kubenswrapper[4947]: I1129 07:03:03.859420 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bc26e42-9673-458e-9d27-c18b701e50fb-utilities\") pod \"redhat-marketplace-tlldl\" (UID: \"6bc26e42-9673-458e-9d27-c18b701e50fb\") " pod="openshift-marketplace/redhat-marketplace-tlldl" Nov 29 07:03:03 crc kubenswrapper[4947]: I1129 07:03:03.859468 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbsvh\" (UniqueName: \"kubernetes.io/projected/6bc26e42-9673-458e-9d27-c18b701e50fb-kube-api-access-nbsvh\") pod \"redhat-marketplace-tlldl\" (UID: \"6bc26e42-9673-458e-9d27-c18b701e50fb\") " pod="openshift-marketplace/redhat-marketplace-tlldl" Nov 29 07:03:03 crc kubenswrapper[4947]: I1129 07:03:03.962133 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bc26e42-9673-458e-9d27-c18b701e50fb-catalog-content\") pod \"redhat-marketplace-tlldl\" (UID: \"6bc26e42-9673-458e-9d27-c18b701e50fb\") " pod="openshift-marketplace/redhat-marketplace-tlldl" Nov 29 07:03:03 crc kubenswrapper[4947]: I1129 07:03:03.962276 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bc26e42-9673-458e-9d27-c18b701e50fb-utilities\") pod \"redhat-marketplace-tlldl\" (UID: \"6bc26e42-9673-458e-9d27-c18b701e50fb\") " pod="openshift-marketplace/redhat-marketplace-tlldl" Nov 29 07:03:03 crc kubenswrapper[4947]: I1129 07:03:03.962317 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbsvh\" (UniqueName: \"kubernetes.io/projected/6bc26e42-9673-458e-9d27-c18b701e50fb-kube-api-access-nbsvh\") pod \"redhat-marketplace-tlldl\" (UID: \"6bc26e42-9673-458e-9d27-c18b701e50fb\") " pod="openshift-marketplace/redhat-marketplace-tlldl" Nov 29 07:03:03 crc kubenswrapper[4947]: I1129 07:03:03.963029 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bc26e42-9673-458e-9d27-c18b701e50fb-catalog-content\") pod \"redhat-marketplace-tlldl\" (UID: \"6bc26e42-9673-458e-9d27-c18b701e50fb\") " pod="openshift-marketplace/redhat-marketplace-tlldl" Nov 29 07:03:03 crc kubenswrapper[4947]: I1129 07:03:03.963065 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bc26e42-9673-458e-9d27-c18b701e50fb-utilities\") pod \"redhat-marketplace-tlldl\" (UID: \"6bc26e42-9673-458e-9d27-c18b701e50fb\") " pod="openshift-marketplace/redhat-marketplace-tlldl" Nov 29 07:03:03 crc kubenswrapper[4947]: I1129 07:03:03.988585 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbsvh\" (UniqueName: \"kubernetes.io/projected/6bc26e42-9673-458e-9d27-c18b701e50fb-kube-api-access-nbsvh\") pod \"redhat-marketplace-tlldl\" (UID: \"6bc26e42-9673-458e-9d27-c18b701e50fb\") " pod="openshift-marketplace/redhat-marketplace-tlldl" Nov 29 07:03:04 crc kubenswrapper[4947]: I1129 07:03:04.039454 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlldl" Nov 29 07:03:04 crc kubenswrapper[4947]: I1129 07:03:04.576369 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlldl"] Nov 29 07:03:04 crc kubenswrapper[4947]: I1129 07:03:04.622661 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlldl" event={"ID":"6bc26e42-9673-458e-9d27-c18b701e50fb","Type":"ContainerStarted","Data":"e0df58421c5fe0f1b0c3f7834fa68d766b684e4405e770b075d5b62ce53ed80e"} Nov 29 07:03:05 crc kubenswrapper[4947]: I1129 07:03:05.635819 4947 generic.go:334] "Generic (PLEG): container finished" podID="6bc26e42-9673-458e-9d27-c18b701e50fb" containerID="5e7262a9620535453c204799a346434f86a28b63c53ce8e553dea996341ac929" exitCode=0 Nov 29 07:03:05 crc kubenswrapper[4947]: I1129 07:03:05.635891 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlldl" event={"ID":"6bc26e42-9673-458e-9d27-c18b701e50fb","Type":"ContainerDied","Data":"5e7262a9620535453c204799a346434f86a28b63c53ce8e553dea996341ac929"} Nov 29 07:03:05 crc kubenswrapper[4947]: I1129 07:03:05.640240 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:03:09 crc kubenswrapper[4947]: I1129 07:03:09.700855 4947 generic.go:334] "Generic (PLEG): container finished" podID="6bc26e42-9673-458e-9d27-c18b701e50fb" containerID="30a2f6ae018059db5ff5e2c688b2562af9ebe8f636e053c82cf46c23c60676fb" exitCode=0 Nov 29 07:03:09 crc kubenswrapper[4947]: I1129 07:03:09.700970 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlldl" event={"ID":"6bc26e42-9673-458e-9d27-c18b701e50fb","Type":"ContainerDied","Data":"30a2f6ae018059db5ff5e2c688b2562af9ebe8f636e053c82cf46c23c60676fb"} Nov 29 07:03:11 crc kubenswrapper[4947]: I1129 07:03:11.179673 4947 scope.go:117] "RemoveContainer" 
containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:03:11 crc kubenswrapper[4947]: E1129 07:03:11.180413 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:03:13 crc kubenswrapper[4947]: I1129 07:03:13.753307 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlldl" event={"ID":"6bc26e42-9673-458e-9d27-c18b701e50fb","Type":"ContainerStarted","Data":"31fbcff9e892e0db9c4cd1a57a22abb82283185d19754c993ced6ac6d88335db"} Nov 29 07:03:13 crc kubenswrapper[4947]: I1129 07:03:13.783696 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tlldl" podStartSLOduration=3.44120874 podStartE2EDuration="10.783670777s" podCreationTimestamp="2025-11-29 07:03:03 +0000 UTC" firstStartedPulling="2025-11-29 07:03:05.639954932 +0000 UTC m=+1736.684337013" lastFinishedPulling="2025-11-29 07:03:12.982416969 +0000 UTC m=+1744.026799050" observedRunningTime="2025-11-29 07:03:13.773473782 +0000 UTC m=+1744.817855873" watchObservedRunningTime="2025-11-29 07:03:13.783670777 +0000 UTC m=+1744.828052858" Nov 29 07:03:14 crc kubenswrapper[4947]: I1129 07:03:14.040336 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tlldl" Nov 29 07:03:14 crc kubenswrapper[4947]: I1129 07:03:14.040412 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tlldl" Nov 29 07:03:15 crc kubenswrapper[4947]: I1129 07:03:15.090888 4947 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-tlldl" podUID="6bc26e42-9673-458e-9d27-c18b701e50fb" containerName="registry-server" probeResult="failure" output=< Nov 29 07:03:15 crc kubenswrapper[4947]: timeout: failed to connect service ":50051" within 1s Nov 29 07:03:15 crc kubenswrapper[4947]: > Nov 29 07:03:23 crc kubenswrapper[4947]: I1129 07:03:23.184290 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:03:23 crc kubenswrapper[4947]: E1129 07:03:23.185027 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:03:24 crc kubenswrapper[4947]: I1129 07:03:24.093371 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tlldl" Nov 29 07:03:24 crc kubenswrapper[4947]: I1129 07:03:24.158588 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tlldl" Nov 29 07:03:24 crc kubenswrapper[4947]: I1129 07:03:24.335488 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlldl"] Nov 29 07:03:25 crc kubenswrapper[4947]: I1129 07:03:25.886117 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tlldl" podUID="6bc26e42-9673-458e-9d27-c18b701e50fb" containerName="registry-server" containerID="cri-o://31fbcff9e892e0db9c4cd1a57a22abb82283185d19754c993ced6ac6d88335db" gracePeriod=2 Nov 29 07:03:26 crc kubenswrapper[4947]: I1129 07:03:26.904145 4947 
generic.go:334] "Generic (PLEG): container finished" podID="6bc26e42-9673-458e-9d27-c18b701e50fb" containerID="31fbcff9e892e0db9c4cd1a57a22abb82283185d19754c993ced6ac6d88335db" exitCode=0 Nov 29 07:03:26 crc kubenswrapper[4947]: I1129 07:03:26.904233 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlldl" event={"ID":"6bc26e42-9673-458e-9d27-c18b701e50fb","Type":"ContainerDied","Data":"31fbcff9e892e0db9c4cd1a57a22abb82283185d19754c993ced6ac6d88335db"} Nov 29 07:03:27 crc kubenswrapper[4947]: I1129 07:03:27.870654 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlldl" Nov 29 07:03:27 crc kubenswrapper[4947]: I1129 07:03:27.923752 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlldl" event={"ID":"6bc26e42-9673-458e-9d27-c18b701e50fb","Type":"ContainerDied","Data":"e0df58421c5fe0f1b0c3f7834fa68d766b684e4405e770b075d5b62ce53ed80e"} Nov 29 07:03:27 crc kubenswrapper[4947]: I1129 07:03:27.923807 4947 scope.go:117] "RemoveContainer" containerID="31fbcff9e892e0db9c4cd1a57a22abb82283185d19754c993ced6ac6d88335db" Nov 29 07:03:27 crc kubenswrapper[4947]: I1129 07:03:27.923934 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlldl" Nov 29 07:03:27 crc kubenswrapper[4947]: I1129 07:03:27.951593 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bc26e42-9673-458e-9d27-c18b701e50fb-utilities\") pod \"6bc26e42-9673-458e-9d27-c18b701e50fb\" (UID: \"6bc26e42-9673-458e-9d27-c18b701e50fb\") " Nov 29 07:03:27 crc kubenswrapper[4947]: I1129 07:03:27.951736 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbsvh\" (UniqueName: \"kubernetes.io/projected/6bc26e42-9673-458e-9d27-c18b701e50fb-kube-api-access-nbsvh\") pod \"6bc26e42-9673-458e-9d27-c18b701e50fb\" (UID: \"6bc26e42-9673-458e-9d27-c18b701e50fb\") " Nov 29 07:03:27 crc kubenswrapper[4947]: I1129 07:03:27.951926 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bc26e42-9673-458e-9d27-c18b701e50fb-catalog-content\") pod \"6bc26e42-9673-458e-9d27-c18b701e50fb\" (UID: \"6bc26e42-9673-458e-9d27-c18b701e50fb\") " Nov 29 07:03:27 crc kubenswrapper[4947]: I1129 07:03:27.956441 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bc26e42-9673-458e-9d27-c18b701e50fb-utilities" (OuterVolumeSpecName: "utilities") pod "6bc26e42-9673-458e-9d27-c18b701e50fb" (UID: "6bc26e42-9673-458e-9d27-c18b701e50fb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:03:27 crc kubenswrapper[4947]: I1129 07:03:27.968661 4947 scope.go:117] "RemoveContainer" containerID="30a2f6ae018059db5ff5e2c688b2562af9ebe8f636e053c82cf46c23c60676fb" Nov 29 07:03:27 crc kubenswrapper[4947]: I1129 07:03:27.969001 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bc26e42-9673-458e-9d27-c18b701e50fb-kube-api-access-nbsvh" (OuterVolumeSpecName: "kube-api-access-nbsvh") pod "6bc26e42-9673-458e-9d27-c18b701e50fb" (UID: "6bc26e42-9673-458e-9d27-c18b701e50fb"). InnerVolumeSpecName "kube-api-access-nbsvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:03:27 crc kubenswrapper[4947]: I1129 07:03:27.979356 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bc26e42-9673-458e-9d27-c18b701e50fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6bc26e42-9673-458e-9d27-c18b701e50fb" (UID: "6bc26e42-9673-458e-9d27-c18b701e50fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:03:28 crc kubenswrapper[4947]: I1129 07:03:28.032581 4947 scope.go:117] "RemoveContainer" containerID="5e7262a9620535453c204799a346434f86a28b63c53ce8e553dea996341ac929" Nov 29 07:03:28 crc kubenswrapper[4947]: I1129 07:03:28.058391 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bc26e42-9673-458e-9d27-c18b701e50fb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:03:28 crc kubenswrapper[4947]: I1129 07:03:28.058558 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bc26e42-9673-458e-9d27-c18b701e50fb-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:03:28 crc kubenswrapper[4947]: I1129 07:03:28.058644 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbsvh\" (UniqueName: \"kubernetes.io/projected/6bc26e42-9673-458e-9d27-c18b701e50fb-kube-api-access-nbsvh\") on node \"crc\" DevicePath \"\"" Nov 29 07:03:28 crc kubenswrapper[4947]: I1129 07:03:28.263018 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlldl"] Nov 29 07:03:28 crc kubenswrapper[4947]: I1129 07:03:28.274178 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlldl"] Nov 29 07:03:29 crc kubenswrapper[4947]: I1129 07:03:29.190881 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bc26e42-9673-458e-9d27-c18b701e50fb" path="/var/lib/kubelet/pods/6bc26e42-9673-458e-9d27-c18b701e50fb/volumes" Nov 29 07:03:38 crc kubenswrapper[4947]: I1129 07:03:38.179659 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:03:38 crc kubenswrapper[4947]: E1129 07:03:38.180754 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:03:50 crc kubenswrapper[4947]: I1129 07:03:50.179081 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:03:50 crc kubenswrapper[4947]: E1129 07:03:50.180340 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:04:02 crc kubenswrapper[4947]: I1129 07:04:02.179538 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:04:02 crc kubenswrapper[4947]: E1129 07:04:02.180563 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:04:16 crc kubenswrapper[4947]: I1129 07:04:16.178943 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:04:16 crc kubenswrapper[4947]: E1129 07:04:16.180180 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:04:29 crc kubenswrapper[4947]: I1129 07:04:29.187808 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:04:29 crc kubenswrapper[4947]: E1129 07:04:29.188794 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:04:35 crc kubenswrapper[4947]: I1129 07:04:35.961905 4947 scope.go:117] "RemoveContainer" containerID="7e2ce4a8b293b02dd386adfb738bc53fd5c8f38a3c48fb6e1f08ffd7a020b066" Nov 29 07:04:35 crc kubenswrapper[4947]: I1129 07:04:35.989045 4947 scope.go:117] "RemoveContainer" containerID="dc5ae4f2a27f16dec9f3a6d76b66cf9965cfd88347f9939f55c5178c1b18c063" Nov 29 07:04:42 crc kubenswrapper[4947]: I1129 07:04:42.179689 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:04:42 crc kubenswrapper[4947]: E1129 07:04:42.180789 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:04:55 crc kubenswrapper[4947]: I1129 07:04:55.181201 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:04:55 crc kubenswrapper[4947]: E1129 07:04:55.182181 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:05:04 crc kubenswrapper[4947]: I1129 07:05:04.047967 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-50f4-account-create-update-9rpwl"] Nov 29 07:05:04 crc kubenswrapper[4947]: I1129 07:05:04.059038 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6pgp4"] Nov 29 07:05:04 crc kubenswrapper[4947]: I1129 07:05:04.072961 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-d5thj"] Nov 29 07:05:04 crc kubenswrapper[4947]: I1129 07:05:04.085982 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-d5thj"] Nov 29 07:05:04 crc kubenswrapper[4947]: I1129 07:05:04.118821 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-50f4-account-create-update-9rpwl"] Nov 29 07:05:04 crc kubenswrapper[4947]: I1129 07:05:04.135858 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6pgp4"] Nov 29 07:05:05 crc kubenswrapper[4947]: I1129 07:05:05.036675 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-2380-account-create-update-8fbcp"] Nov 29 07:05:05 crc 
kubenswrapper[4947]: I1129 07:05:05.045985 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-2380-account-create-update-8fbcp"] Nov 29 07:05:05 crc kubenswrapper[4947]: I1129 07:05:05.190783 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f8504e7-7313-440f-87b1-13a06167f241" path="/var/lib/kubelet/pods/0f8504e7-7313-440f-87b1-13a06167f241/volumes" Nov 29 07:05:05 crc kubenswrapper[4947]: I1129 07:05:05.191724 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28eb01a7-ba6c-4709-ba2c-031ef8fe4b17" path="/var/lib/kubelet/pods/28eb01a7-ba6c-4709-ba2c-031ef8fe4b17/volumes" Nov 29 07:05:05 crc kubenswrapper[4947]: I1129 07:05:05.192378 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da3753b-36fd-41b4-a3d9-34ad05c5e2f1" path="/var/lib/kubelet/pods/3da3753b-36fd-41b4-a3d9-34ad05c5e2f1/volumes" Nov 29 07:05:05 crc kubenswrapper[4947]: I1129 07:05:05.193063 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75a2658c-c424-413f-9b8e-40a8cf3e6aeb" path="/var/lib/kubelet/pods/75a2658c-c424-413f-9b8e-40a8cf3e6aeb/volumes" Nov 29 07:05:06 crc kubenswrapper[4947]: I1129 07:05:06.179656 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:05:06 crc kubenswrapper[4947]: E1129 07:05:06.180314 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:05:08 crc kubenswrapper[4947]: I1129 07:05:08.055593 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-g5qfw"] Nov 29 
07:05:08 crc kubenswrapper[4947]: I1129 07:05:08.064527 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-g5qfw"] Nov 29 07:05:09 crc kubenswrapper[4947]: I1129 07:05:09.038329 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f74a-account-create-update-jsfv8"] Nov 29 07:05:09 crc kubenswrapper[4947]: I1129 07:05:09.049412 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-f74a-account-create-update-jsfv8"] Nov 29 07:05:09 crc kubenswrapper[4947]: I1129 07:05:09.204067 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4647fd5c-6d46-4947-be25-554fa8a74cff" path="/var/lib/kubelet/pods/4647fd5c-6d46-4947-be25-554fa8a74cff/volumes" Nov 29 07:05:09 crc kubenswrapper[4947]: I1129 07:05:09.204961 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d8e8106-6ded-493a-8df9-9c798209d461" path="/var/lib/kubelet/pods/7d8e8106-6ded-493a-8df9-9c798209d461/volumes" Nov 29 07:05:13 crc kubenswrapper[4947]: I1129 07:05:13.038587 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-g7bk7"] Nov 29 07:05:13 crc kubenswrapper[4947]: I1129 07:05:13.048681 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-g7bk7"] Nov 29 07:05:13 crc kubenswrapper[4947]: I1129 07:05:13.192704 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1884516f-3c31-4dc9-8d46-4cceeefeb6e2" path="/var/lib/kubelet/pods/1884516f-3c31-4dc9-8d46-4cceeefeb6e2/volumes" Nov 29 07:05:14 crc kubenswrapper[4947]: I1129 07:05:14.039861 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-274hz"] Nov 29 07:05:14 crc kubenswrapper[4947]: I1129 07:05:14.048722 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-274hz"] Nov 29 07:05:15 crc kubenswrapper[4947]: I1129 07:05:15.191838 4947 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="e8a15105-2612-40d5-b685-5ee0c0ad58a8" path="/var/lib/kubelet/pods/e8a15105-2612-40d5-b685-5ee0c0ad58a8/volumes" Nov 29 07:05:17 crc kubenswrapper[4947]: I1129 07:05:17.179743 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:05:17 crc kubenswrapper[4947]: E1129 07:05:17.180666 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:05:25 crc kubenswrapper[4947]: I1129 07:05:25.042408 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b914-account-create-update-c249x"] Nov 29 07:05:25 crc kubenswrapper[4947]: I1129 07:05:25.054810 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b914-account-create-update-c249x"] Nov 29 07:05:25 crc kubenswrapper[4947]: I1129 07:05:25.191470 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c59e9c9-8ffc-422e-8565-7aa51d7ae12d" path="/var/lib/kubelet/pods/8c59e9c9-8ffc-422e-8565-7aa51d7ae12d/volumes" Nov 29 07:05:25 crc kubenswrapper[4947]: I1129 07:05:25.380737 4947 generic.go:334] "Generic (PLEG): container finished" podID="2c0f3f4c-3de4-4c42-b53f-57845082dd20" containerID="c70804da874a89e394a2eb609846fd6eb81bba660410545dfb851a76fb638483" exitCode=0 Nov 29 07:05:25 crc kubenswrapper[4947]: I1129 07:05:25.380806 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs" 
event={"ID":"2c0f3f4c-3de4-4c42-b53f-57845082dd20","Type":"ContainerDied","Data":"c70804da874a89e394a2eb609846fd6eb81bba660410545dfb851a76fb638483"} Nov 29 07:05:26 crc kubenswrapper[4947]: I1129 07:05:26.861554 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs" Nov 29 07:05:26 crc kubenswrapper[4947]: I1129 07:05:26.968655 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2c0f3f4c-3de4-4c42-b53f-57845082dd20-ssh-key\") pod \"2c0f3f4c-3de4-4c42-b53f-57845082dd20\" (UID: \"2c0f3f4c-3de4-4c42-b53f-57845082dd20\") " Nov 29 07:05:26 crc kubenswrapper[4947]: I1129 07:05:26.968781 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c0f3f4c-3de4-4c42-b53f-57845082dd20-inventory\") pod \"2c0f3f4c-3de4-4c42-b53f-57845082dd20\" (UID: \"2c0f3f4c-3de4-4c42-b53f-57845082dd20\") " Nov 29 07:05:26 crc kubenswrapper[4947]: I1129 07:05:26.970035 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0f3f4c-3de4-4c42-b53f-57845082dd20-bootstrap-combined-ca-bundle\") pod \"2c0f3f4c-3de4-4c42-b53f-57845082dd20\" (UID: \"2c0f3f4c-3de4-4c42-b53f-57845082dd20\") " Nov 29 07:05:26 crc kubenswrapper[4947]: I1129 07:05:26.970333 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vqw9\" (UniqueName: \"kubernetes.io/projected/2c0f3f4c-3de4-4c42-b53f-57845082dd20-kube-api-access-6vqw9\") pod \"2c0f3f4c-3de4-4c42-b53f-57845082dd20\" (UID: \"2c0f3f4c-3de4-4c42-b53f-57845082dd20\") " Nov 29 07:05:26 crc kubenswrapper[4947]: I1129 07:05:26.977351 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2c0f3f4c-3de4-4c42-b53f-57845082dd20-kube-api-access-6vqw9" (OuterVolumeSpecName: "kube-api-access-6vqw9") pod "2c0f3f4c-3de4-4c42-b53f-57845082dd20" (UID: "2c0f3f4c-3de4-4c42-b53f-57845082dd20"). InnerVolumeSpecName "kube-api-access-6vqw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:05:26 crc kubenswrapper[4947]: I1129 07:05:26.977863 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c0f3f4c-3de4-4c42-b53f-57845082dd20-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2c0f3f4c-3de4-4c42-b53f-57845082dd20" (UID: "2c0f3f4c-3de4-4c42-b53f-57845082dd20"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.007616 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c0f3f4c-3de4-4c42-b53f-57845082dd20-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2c0f3f4c-3de4-4c42-b53f-57845082dd20" (UID: "2c0f3f4c-3de4-4c42-b53f-57845082dd20"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.008064 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c0f3f4c-3de4-4c42-b53f-57845082dd20-inventory" (OuterVolumeSpecName: "inventory") pod "2c0f3f4c-3de4-4c42-b53f-57845082dd20" (UID: "2c0f3f4c-3de4-4c42-b53f-57845082dd20"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.072295 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vqw9\" (UniqueName: \"kubernetes.io/projected/2c0f3f4c-3de4-4c42-b53f-57845082dd20-kube-api-access-6vqw9\") on node \"crc\" DevicePath \"\"" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.072429 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2c0f3f4c-3de4-4c42-b53f-57845082dd20-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.072445 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c0f3f4c-3de4-4c42-b53f-57845082dd20-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.072457 4947 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0f3f4c-3de4-4c42-b53f-57845082dd20-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.410498 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs" event={"ID":"2c0f3f4c-3de4-4c42-b53f-57845082dd20","Type":"ContainerDied","Data":"a3b76e49a6be7d87ca99d685804bff18cbfddcd3e90c3246bcb849df34ab1e53"} Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.410872 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3b76e49a6be7d87ca99d685804bff18cbfddcd3e90c3246bcb849df34ab1e53" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.410632 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.519970 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5"] Nov 29 07:05:27 crc kubenswrapper[4947]: E1129 07:05:27.537093 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bc26e42-9673-458e-9d27-c18b701e50fb" containerName="extract-utilities" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.537181 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bc26e42-9673-458e-9d27-c18b701e50fb" containerName="extract-utilities" Nov 29 07:05:27 crc kubenswrapper[4947]: E1129 07:05:27.537236 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bc26e42-9673-458e-9d27-c18b701e50fb" containerName="registry-server" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.537249 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bc26e42-9673-458e-9d27-c18b701e50fb" containerName="registry-server" Nov 29 07:05:27 crc kubenswrapper[4947]: E1129 07:05:27.537319 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bc26e42-9673-458e-9d27-c18b701e50fb" containerName="extract-content" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.537330 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bc26e42-9673-458e-9d27-c18b701e50fb" containerName="extract-content" Nov 29 07:05:27 crc kubenswrapper[4947]: E1129 07:05:27.537368 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0f3f4c-3de4-4c42-b53f-57845082dd20" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.537379 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0f3f4c-3de4-4c42-b53f-57845082dd20" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.538536 
4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c0f3f4c-3de4-4c42-b53f-57845082dd20" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.538634 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bc26e42-9673-458e-9d27-c18b701e50fb" containerName="registry-server" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.544053 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.546859 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5"] Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.549490 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.549591 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.552049 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.562120 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.684789 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfrsd\" (UniqueName: \"kubernetes.io/projected/05604b27-055f-4688-9382-f1c91615bc46-kube-api-access-dfrsd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5\" (UID: \"05604b27-055f-4688-9382-f1c91615bc46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5" Nov 29 
07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.685333 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05604b27-055f-4688-9382-f1c91615bc46-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5\" (UID: \"05604b27-055f-4688-9382-f1c91615bc46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.685506 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05604b27-055f-4688-9382-f1c91615bc46-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5\" (UID: \"05604b27-055f-4688-9382-f1c91615bc46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.787178 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfrsd\" (UniqueName: \"kubernetes.io/projected/05604b27-055f-4688-9382-f1c91615bc46-kube-api-access-dfrsd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5\" (UID: \"05604b27-055f-4688-9382-f1c91615bc46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.787339 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05604b27-055f-4688-9382-f1c91615bc46-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5\" (UID: \"05604b27-055f-4688-9382-f1c91615bc46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.787428 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/05604b27-055f-4688-9382-f1c91615bc46-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5\" (UID: \"05604b27-055f-4688-9382-f1c91615bc46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.797681 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05604b27-055f-4688-9382-f1c91615bc46-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5\" (UID: \"05604b27-055f-4688-9382-f1c91615bc46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.797687 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05604b27-055f-4688-9382-f1c91615bc46-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5\" (UID: \"05604b27-055f-4688-9382-f1c91615bc46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.810236 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfrsd\" (UniqueName: \"kubernetes.io/projected/05604b27-055f-4688-9382-f1c91615bc46-kube-api-access-dfrsd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5\" (UID: \"05604b27-055f-4688-9382-f1c91615bc46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5" Nov 29 07:05:27 crc kubenswrapper[4947]: I1129 07:05:27.876533 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5" Nov 29 07:05:28 crc kubenswrapper[4947]: I1129 07:05:28.276350 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5"] Nov 29 07:05:28 crc kubenswrapper[4947]: I1129 07:05:28.425322 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5" event={"ID":"05604b27-055f-4688-9382-f1c91615bc46","Type":"ContainerStarted","Data":"8f7d871b5c460ed41d9d5265937f6e7a831cef7b7f83f15d03dce1d2616f6f3c"} Nov 29 07:05:30 crc kubenswrapper[4947]: I1129 07:05:30.467028 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5" event={"ID":"05604b27-055f-4688-9382-f1c91615bc46","Type":"ContainerStarted","Data":"29411b86a185aa97f61a263fd791092a82016efe9d9faf00cde2b5639c3fdae4"} Nov 29 07:05:30 crc kubenswrapper[4947]: I1129 07:05:30.487349 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5" podStartSLOduration=2.430485954 podStartE2EDuration="3.487327424s" podCreationTimestamp="2025-11-29 07:05:27 +0000 UTC" firstStartedPulling="2025-11-29 07:05:28.289190889 +0000 UTC m=+1879.333572970" lastFinishedPulling="2025-11-29 07:05:29.346032339 +0000 UTC m=+1880.390414440" observedRunningTime="2025-11-29 07:05:30.486662637 +0000 UTC m=+1881.531044718" watchObservedRunningTime="2025-11-29 07:05:30.487327424 +0000 UTC m=+1881.531709505" Nov 29 07:05:32 crc kubenswrapper[4947]: I1129 07:05:32.180110 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:05:32 crc kubenswrapper[4947]: E1129 07:05:32.181541 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:05:33 crc kubenswrapper[4947]: I1129 07:05:33.043489 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-p4xsj"] Nov 29 07:05:33 crc kubenswrapper[4947]: I1129 07:05:33.057944 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-04ce-account-create-update-jjbtk"] Nov 29 07:05:33 crc kubenswrapper[4947]: I1129 07:05:33.069740 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-p4xsj"] Nov 29 07:05:33 crc kubenswrapper[4947]: I1129 07:05:33.081175 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-04ce-account-create-update-jjbtk"] Nov 29 07:05:33 crc kubenswrapper[4947]: I1129 07:05:33.091281 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8407-account-create-update-b65nc"] Nov 29 07:05:33 crc kubenswrapper[4947]: I1129 07:05:33.100738 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8407-account-create-update-b65nc"] Nov 29 07:05:33 crc kubenswrapper[4947]: I1129 07:05:33.211081 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff65d93-9651-43e9-9309-49ed52f33a3c" path="/var/lib/kubelet/pods/7ff65d93-9651-43e9-9309-49ed52f33a3c/volumes" Nov 29 07:05:33 crc kubenswrapper[4947]: I1129 07:05:33.211942 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c243880-90ea-479d-bba2-a12f36ad3e82" path="/var/lib/kubelet/pods/9c243880-90ea-479d-bba2-a12f36ad3e82/volumes" Nov 29 07:05:33 crc kubenswrapper[4947]: I1129 07:05:33.212826 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a315def0-b8c0-4a35-95e3-faf969bb901c" path="/var/lib/kubelet/pods/a315def0-b8c0-4a35-95e3-faf969bb901c/volumes" Nov 29 07:05:36 crc kubenswrapper[4947]: I1129 07:05:36.054764 4947 scope.go:117] "RemoveContainer" containerID="70c1c80e0cceb47f2f19077b4c6b7eed8596e31cd8ca081697a71fb968008929" Nov 29 07:05:36 crc kubenswrapper[4947]: I1129 07:05:36.097323 4947 scope.go:117] "RemoveContainer" containerID="489752600eba33d8113eba1165505f1a4acd04e6273f14c16462bf120ce22f63" Nov 29 07:05:36 crc kubenswrapper[4947]: I1129 07:05:36.148060 4947 scope.go:117] "RemoveContainer" containerID="d848c790d67b09dce2b34a90b7ce52c5ec4bf0f0296069b9065fb2b090e897af" Nov 29 07:05:36 crc kubenswrapper[4947]: I1129 07:05:36.192213 4947 scope.go:117] "RemoveContainer" containerID="076f2fa9cbe8cfa6fdb60e5d12fc40cde04b899b6a01e3ccce60342199d349f3" Nov 29 07:05:36 crc kubenswrapper[4947]: I1129 07:05:36.243791 4947 scope.go:117] "RemoveContainer" containerID="b11a1bac308817c416b51baae92a04ac7cfd167554827b16476b89c6d45fbff7" Nov 29 07:05:36 crc kubenswrapper[4947]: I1129 07:05:36.298056 4947 scope.go:117] "RemoveContainer" containerID="a7dff5927bee0a0a68415cc71926f507e73174840345428cacecb266d68514e2" Nov 29 07:05:36 crc kubenswrapper[4947]: I1129 07:05:36.321561 4947 scope.go:117] "RemoveContainer" containerID="f7e61fea305057921b77e15282c15cc48935a75159f3fdc4ee995e01881439ed" Nov 29 07:05:36 crc kubenswrapper[4947]: I1129 07:05:36.373254 4947 scope.go:117] "RemoveContainer" containerID="604d1f2f681d00aabfa37f506853ca330af00f3d7877a66d7bdff7733b89f7b0" Nov 29 07:05:36 crc kubenswrapper[4947]: I1129 07:05:36.397527 4947 scope.go:117] "RemoveContainer" containerID="f3f1bb31c83c16ea36b22c1cf9f374c59a994a8d564b6b94b19b9e3af24689e0" Nov 29 07:05:36 crc kubenswrapper[4947]: I1129 07:05:36.422604 4947 scope.go:117] "RemoveContainer" containerID="0947df8cdc146d3c4e325d26f8b32b5b0fb2b3f775107d120af0202291f237b6" Nov 29 07:05:36 crc kubenswrapper[4947]: I1129 07:05:36.443803 4947 scope.go:117] 
"RemoveContainer" containerID="dd31444ca7c2a006fc538c1f4c099000022fbe7ea2752e9ee126c5e23064a312" Nov 29 07:05:36 crc kubenswrapper[4947]: I1129 07:05:36.472692 4947 scope.go:117] "RemoveContainer" containerID="0b180403347745205a169de81adc02b2ffc9fc88eddec7ab486e5a9ab0bade32" Nov 29 07:05:36 crc kubenswrapper[4947]: I1129 07:05:36.500408 4947 scope.go:117] "RemoveContainer" containerID="3a45ff862944a36f00d1b34cdd784774b21726d2522be0cabb02e89822623af4" Nov 29 07:05:36 crc kubenswrapper[4947]: I1129 07:05:36.529449 4947 scope.go:117] "RemoveContainer" containerID="3ab396327df99ffa84c29b3623168cbf3e22dbe0f86b662c4b147a5cf43b0979" Nov 29 07:05:46 crc kubenswrapper[4947]: I1129 07:05:46.064962 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-svhxq"] Nov 29 07:05:46 crc kubenswrapper[4947]: I1129 07:05:46.080021 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-svhxq"] Nov 29 07:05:46 crc kubenswrapper[4947]: I1129 07:05:46.178591 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:05:46 crc kubenswrapper[4947]: E1129 07:05:46.179180 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:05:47 crc kubenswrapper[4947]: I1129 07:05:47.191137 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96a21275-0215-4d96-adcd-6e60d7a2c900" path="/var/lib/kubelet/pods/96a21275-0215-4d96-adcd-6e60d7a2c900/volumes" Nov 29 07:05:59 crc kubenswrapper[4947]: I1129 07:05:59.187132 4947 scope.go:117] "RemoveContainer" 
containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:05:59 crc kubenswrapper[4947]: E1129 07:05:59.188436 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:06:10 crc kubenswrapper[4947]: I1129 07:06:10.179244 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:06:10 crc kubenswrapper[4947]: E1129 07:06:10.180396 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:06:22 crc kubenswrapper[4947]: I1129 07:06:22.179622 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:06:22 crc kubenswrapper[4947]: E1129 07:06:22.180709 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:06:33 crc kubenswrapper[4947]: I1129 07:06:33.613931 4947 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-466pl"] Nov 29 07:06:33 crc kubenswrapper[4947]: I1129 07:06:33.617289 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-466pl" Nov 29 07:06:33 crc kubenswrapper[4947]: I1129 07:06:33.629883 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-466pl"] Nov 29 07:06:33 crc kubenswrapper[4947]: I1129 07:06:33.718746 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d5c9d7a-2105-48a6-bec7-d46e3141f581-catalog-content\") pod \"certified-operators-466pl\" (UID: \"3d5c9d7a-2105-48a6-bec7-d46e3141f581\") " pod="openshift-marketplace/certified-operators-466pl" Nov 29 07:06:33 crc kubenswrapper[4947]: I1129 07:06:33.718805 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4ppv\" (UniqueName: \"kubernetes.io/projected/3d5c9d7a-2105-48a6-bec7-d46e3141f581-kube-api-access-w4ppv\") pod \"certified-operators-466pl\" (UID: \"3d5c9d7a-2105-48a6-bec7-d46e3141f581\") " pod="openshift-marketplace/certified-operators-466pl" Nov 29 07:06:33 crc kubenswrapper[4947]: I1129 07:06:33.718960 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d5c9d7a-2105-48a6-bec7-d46e3141f581-utilities\") pod \"certified-operators-466pl\" (UID: \"3d5c9d7a-2105-48a6-bec7-d46e3141f581\") " pod="openshift-marketplace/certified-operators-466pl" Nov 29 07:06:33 crc kubenswrapper[4947]: I1129 07:06:33.821466 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d5c9d7a-2105-48a6-bec7-d46e3141f581-utilities\") pod \"certified-operators-466pl\" 
(UID: \"3d5c9d7a-2105-48a6-bec7-d46e3141f581\") " pod="openshift-marketplace/certified-operators-466pl" Nov 29 07:06:33 crc kubenswrapper[4947]: I1129 07:06:33.821664 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d5c9d7a-2105-48a6-bec7-d46e3141f581-catalog-content\") pod \"certified-operators-466pl\" (UID: \"3d5c9d7a-2105-48a6-bec7-d46e3141f581\") " pod="openshift-marketplace/certified-operators-466pl" Nov 29 07:06:33 crc kubenswrapper[4947]: I1129 07:06:33.821699 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4ppv\" (UniqueName: \"kubernetes.io/projected/3d5c9d7a-2105-48a6-bec7-d46e3141f581-kube-api-access-w4ppv\") pod \"certified-operators-466pl\" (UID: \"3d5c9d7a-2105-48a6-bec7-d46e3141f581\") " pod="openshift-marketplace/certified-operators-466pl" Nov 29 07:06:33 crc kubenswrapper[4947]: I1129 07:06:33.822139 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d5c9d7a-2105-48a6-bec7-d46e3141f581-utilities\") pod \"certified-operators-466pl\" (UID: \"3d5c9d7a-2105-48a6-bec7-d46e3141f581\") " pod="openshift-marketplace/certified-operators-466pl" Nov 29 07:06:33 crc kubenswrapper[4947]: I1129 07:06:33.822924 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d5c9d7a-2105-48a6-bec7-d46e3141f581-catalog-content\") pod \"certified-operators-466pl\" (UID: \"3d5c9d7a-2105-48a6-bec7-d46e3141f581\") " pod="openshift-marketplace/certified-operators-466pl" Nov 29 07:06:33 crc kubenswrapper[4947]: I1129 07:06:33.844905 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4ppv\" (UniqueName: \"kubernetes.io/projected/3d5c9d7a-2105-48a6-bec7-d46e3141f581-kube-api-access-w4ppv\") pod \"certified-operators-466pl\" (UID: 
\"3d5c9d7a-2105-48a6-bec7-d46e3141f581\") " pod="openshift-marketplace/certified-operators-466pl" Nov 29 07:06:33 crc kubenswrapper[4947]: I1129 07:06:33.945706 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-466pl" Nov 29 07:06:34 crc kubenswrapper[4947]: I1129 07:06:34.489573 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-466pl"] Nov 29 07:06:35 crc kubenswrapper[4947]: I1129 07:06:35.195419 4947 generic.go:334] "Generic (PLEG): container finished" podID="3d5c9d7a-2105-48a6-bec7-d46e3141f581" containerID="298c3ca1662b0b7a7f24ac822a0802b2ec6b54285e897597960fb0684500ebae" exitCode=0 Nov 29 07:06:35 crc kubenswrapper[4947]: I1129 07:06:35.195693 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-466pl" event={"ID":"3d5c9d7a-2105-48a6-bec7-d46e3141f581","Type":"ContainerDied","Data":"298c3ca1662b0b7a7f24ac822a0802b2ec6b54285e897597960fb0684500ebae"} Nov 29 07:06:35 crc kubenswrapper[4947]: I1129 07:06:35.196014 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-466pl" event={"ID":"3d5c9d7a-2105-48a6-bec7-d46e3141f581","Type":"ContainerStarted","Data":"a493e9585b5a37bc09674875b08f7056ed11847dc8db6225eaedd9d261417ac8"} Nov 29 07:06:36 crc kubenswrapper[4947]: I1129 07:06:36.048798 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-qqj6m"] Nov 29 07:06:36 crc kubenswrapper[4947]: I1129 07:06:36.059952 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-qqj6m"] Nov 29 07:06:36 crc kubenswrapper[4947]: I1129 07:06:36.179637 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:06:36 crc kubenswrapper[4947]: E1129 07:06:36.180440 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:06:36 crc kubenswrapper[4947]: I1129 07:06:36.793705 4947 scope.go:117] "RemoveContainer" containerID="8c174c1525a70ec1156fba04120e45e8cd4417c637710bc58bcea553dfcb6d67" Nov 29 07:06:37 crc kubenswrapper[4947]: I1129 07:06:37.040430 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-55q5b"] Nov 29 07:06:37 crc kubenswrapper[4947]: I1129 07:06:37.051049 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-55q5b"] Nov 29 07:06:37 crc kubenswrapper[4947]: I1129 07:06:37.192059 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42494b25-5db7-478c-b2a9-e14c7d990c0c" path="/var/lib/kubelet/pods/42494b25-5db7-478c-b2a9-e14c7d990c0c/volumes" Nov 29 07:06:37 crc kubenswrapper[4947]: I1129 07:06:37.193379 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b" path="/var/lib/kubelet/pods/8b1bf3c1-71b7-4f5e-bcf0-46cf736d8c5b/volumes" Nov 29 07:06:44 crc kubenswrapper[4947]: I1129 07:06:44.295430 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-466pl" event={"ID":"3d5c9d7a-2105-48a6-bec7-d46e3141f581","Type":"ContainerStarted","Data":"d340b37aaed61cc26812b2ea7f96ef1e877d7822c0f941a69690f11b27b5ad12"} Nov 29 07:06:45 crc kubenswrapper[4947]: I1129 07:06:45.314098 4947 generic.go:334] "Generic (PLEG): container finished" podID="3d5c9d7a-2105-48a6-bec7-d46e3141f581" containerID="d340b37aaed61cc26812b2ea7f96ef1e877d7822c0f941a69690f11b27b5ad12" exitCode=0 Nov 29 07:06:45 crc kubenswrapper[4947]: I1129 07:06:45.314174 
4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-466pl" event={"ID":"3d5c9d7a-2105-48a6-bec7-d46e3141f581","Type":"ContainerDied","Data":"d340b37aaed61cc26812b2ea7f96ef1e877d7822c0f941a69690f11b27b5ad12"} Nov 29 07:06:48 crc kubenswrapper[4947]: I1129 07:06:48.346662 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-466pl" event={"ID":"3d5c9d7a-2105-48a6-bec7-d46e3141f581","Type":"ContainerStarted","Data":"1c62ff2ea14fb8d0172e2985ccbab7de799b21a0cf3f008db1badd00ddf2f346"} Nov 29 07:06:48 crc kubenswrapper[4947]: I1129 07:06:48.369087 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-466pl" podStartSLOduration=4.3521316 podStartE2EDuration="15.369061734s" podCreationTimestamp="2025-11-29 07:06:33 +0000 UTC" firstStartedPulling="2025-11-29 07:06:36.211242162 +0000 UTC m=+1947.255624283" lastFinishedPulling="2025-11-29 07:06:47.228172296 +0000 UTC m=+1958.272554417" observedRunningTime="2025-11-29 07:06:48.364892569 +0000 UTC m=+1959.409274670" watchObservedRunningTime="2025-11-29 07:06:48.369061734 +0000 UTC m=+1959.413443835" Nov 29 07:06:49 crc kubenswrapper[4947]: I1129 07:06:49.187746 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:06:49 crc kubenswrapper[4947]: E1129 07:06:49.188132 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:06:53 crc kubenswrapper[4947]: I1129 07:06:53.946178 4947 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-466pl" Nov 29 07:06:53 crc kubenswrapper[4947]: I1129 07:06:53.947098 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-466pl" Nov 29 07:06:53 crc kubenswrapper[4947]: I1129 07:06:53.996190 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-466pl" Nov 29 07:06:54 crc kubenswrapper[4947]: I1129 07:06:54.063794 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-6njvc"] Nov 29 07:06:54 crc kubenswrapper[4947]: I1129 07:06:54.073880 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-6njvc"] Nov 29 07:06:54 crc kubenswrapper[4947]: I1129 07:06:54.463773 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-466pl" Nov 29 07:06:54 crc kubenswrapper[4947]: I1129 07:06:54.522957 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-466pl"] Nov 29 07:06:55 crc kubenswrapper[4947]: I1129 07:06:55.045405 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-l67pc"] Nov 29 07:06:55 crc kubenswrapper[4947]: I1129 07:06:55.056911 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-l67pc"] Nov 29 07:06:55 crc kubenswrapper[4947]: I1129 07:06:55.191290 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d3675d1-9c60-4463-936e-95953f64b250" path="/var/lib/kubelet/pods/7d3675d1-9c60-4463-936e-95953f64b250/volumes" Nov 29 07:06:55 crc kubenswrapper[4947]: I1129 07:06:55.192147 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b341c7eb-214f-49d0-ae91-a27c56857739" path="/var/lib/kubelet/pods/b341c7eb-214f-49d0-ae91-a27c56857739/volumes" Nov 29 07:06:56 crc kubenswrapper[4947]: 
I1129 07:06:56.040289 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-77d4t"] Nov 29 07:06:56 crc kubenswrapper[4947]: I1129 07:06:56.055965 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-77d4t"] Nov 29 07:06:56 crc kubenswrapper[4947]: I1129 07:06:56.427955 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-466pl" podUID="3d5c9d7a-2105-48a6-bec7-d46e3141f581" containerName="registry-server" containerID="cri-o://1c62ff2ea14fb8d0172e2985ccbab7de799b21a0cf3f008db1badd00ddf2f346" gracePeriod=2 Nov 29 07:06:57 crc kubenswrapper[4947]: I1129 07:06:57.191820 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f26cf011-4f52-4d26-a248-b92906824399" path="/var/lib/kubelet/pods/f26cf011-4f52-4d26-a248-b92906824399/volumes" Nov 29 07:06:58 crc kubenswrapper[4947]: I1129 07:06:58.459169 4947 generic.go:334] "Generic (PLEG): container finished" podID="3d5c9d7a-2105-48a6-bec7-d46e3141f581" containerID="1c62ff2ea14fb8d0172e2985ccbab7de799b21a0cf3f008db1badd00ddf2f346" exitCode=0 Nov 29 07:06:58 crc kubenswrapper[4947]: I1129 07:06:58.459353 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-466pl" event={"ID":"3d5c9d7a-2105-48a6-bec7-d46e3141f581","Type":"ContainerDied","Data":"1c62ff2ea14fb8d0172e2985ccbab7de799b21a0cf3f008db1badd00ddf2f346"} Nov 29 07:06:58 crc kubenswrapper[4947]: I1129 07:06:58.747728 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-466pl" Nov 29 07:06:58 crc kubenswrapper[4947]: I1129 07:06:58.895178 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4ppv\" (UniqueName: \"kubernetes.io/projected/3d5c9d7a-2105-48a6-bec7-d46e3141f581-kube-api-access-w4ppv\") pod \"3d5c9d7a-2105-48a6-bec7-d46e3141f581\" (UID: \"3d5c9d7a-2105-48a6-bec7-d46e3141f581\") " Nov 29 07:06:58 crc kubenswrapper[4947]: I1129 07:06:58.895330 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d5c9d7a-2105-48a6-bec7-d46e3141f581-catalog-content\") pod \"3d5c9d7a-2105-48a6-bec7-d46e3141f581\" (UID: \"3d5c9d7a-2105-48a6-bec7-d46e3141f581\") " Nov 29 07:06:58 crc kubenswrapper[4947]: I1129 07:06:58.895406 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d5c9d7a-2105-48a6-bec7-d46e3141f581-utilities\") pod \"3d5c9d7a-2105-48a6-bec7-d46e3141f581\" (UID: \"3d5c9d7a-2105-48a6-bec7-d46e3141f581\") " Nov 29 07:06:58 crc kubenswrapper[4947]: I1129 07:06:58.897428 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d5c9d7a-2105-48a6-bec7-d46e3141f581-utilities" (OuterVolumeSpecName: "utilities") pod "3d5c9d7a-2105-48a6-bec7-d46e3141f581" (UID: "3d5c9d7a-2105-48a6-bec7-d46e3141f581"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:06:58 crc kubenswrapper[4947]: I1129 07:06:58.909854 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d5c9d7a-2105-48a6-bec7-d46e3141f581-kube-api-access-w4ppv" (OuterVolumeSpecName: "kube-api-access-w4ppv") pod "3d5c9d7a-2105-48a6-bec7-d46e3141f581" (UID: "3d5c9d7a-2105-48a6-bec7-d46e3141f581"). InnerVolumeSpecName "kube-api-access-w4ppv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:06:58 crc kubenswrapper[4947]: I1129 07:06:58.944148 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d5c9d7a-2105-48a6-bec7-d46e3141f581-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d5c9d7a-2105-48a6-bec7-d46e3141f581" (UID: "3d5c9d7a-2105-48a6-bec7-d46e3141f581"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:06:58 crc kubenswrapper[4947]: I1129 07:06:58.998150 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4ppv\" (UniqueName: \"kubernetes.io/projected/3d5c9d7a-2105-48a6-bec7-d46e3141f581-kube-api-access-w4ppv\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:58 crc kubenswrapper[4947]: I1129 07:06:58.998245 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d5c9d7a-2105-48a6-bec7-d46e3141f581-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:58 crc kubenswrapper[4947]: I1129 07:06:58.998262 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d5c9d7a-2105-48a6-bec7-d46e3141f581-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:06:59 crc kubenswrapper[4947]: I1129 07:06:59.474475 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-466pl" event={"ID":"3d5c9d7a-2105-48a6-bec7-d46e3141f581","Type":"ContainerDied","Data":"a493e9585b5a37bc09674875b08f7056ed11847dc8db6225eaedd9d261417ac8"} Nov 29 07:06:59 crc kubenswrapper[4947]: I1129 07:06:59.474565 4947 scope.go:117] "RemoveContainer" containerID="1c62ff2ea14fb8d0172e2985ccbab7de799b21a0cf3f008db1badd00ddf2f346" Nov 29 07:06:59 crc kubenswrapper[4947]: I1129 07:06:59.474794 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-466pl" Nov 29 07:06:59 crc kubenswrapper[4947]: I1129 07:06:59.504390 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-466pl"] Nov 29 07:06:59 crc kubenswrapper[4947]: I1129 07:06:59.512547 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-466pl"] Nov 29 07:06:59 crc kubenswrapper[4947]: I1129 07:06:59.516804 4947 scope.go:117] "RemoveContainer" containerID="d340b37aaed61cc26812b2ea7f96ef1e877d7822c0f941a69690f11b27b5ad12" Nov 29 07:06:59 crc kubenswrapper[4947]: I1129 07:06:59.547606 4947 scope.go:117] "RemoveContainer" containerID="298c3ca1662b0b7a7f24ac822a0802b2ec6b54285e897597960fb0684500ebae" Nov 29 07:07:00 crc kubenswrapper[4947]: I1129 07:07:00.180400 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:07:00 crc kubenswrapper[4947]: E1129 07:07:00.181267 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:07:01 crc kubenswrapper[4947]: I1129 07:07:01.216074 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d5c9d7a-2105-48a6-bec7-d46e3141f581" path="/var/lib/kubelet/pods/3d5c9d7a-2105-48a6-bec7-d46e3141f581/volumes" Nov 29 07:07:03 crc kubenswrapper[4947]: I1129 07:07:03.040986 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-4sjrf"] Nov 29 07:07:03 crc kubenswrapper[4947]: I1129 07:07:03.053316 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/neutron-db-sync-4sjrf"] Nov 29 07:07:03 crc kubenswrapper[4947]: I1129 07:07:03.192765 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af3ad5b2-503c-4d1c-927c-0feab47e5212" path="/var/lib/kubelet/pods/af3ad5b2-503c-4d1c-927c-0feab47e5212/volumes" Nov 29 07:07:06 crc kubenswrapper[4947]: I1129 07:07:06.551488 4947 generic.go:334] "Generic (PLEG): container finished" podID="05604b27-055f-4688-9382-f1c91615bc46" containerID="29411b86a185aa97f61a263fd791092a82016efe9d9faf00cde2b5639c3fdae4" exitCode=0 Nov 29 07:07:06 crc kubenswrapper[4947]: I1129 07:07:06.551636 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5" event={"ID":"05604b27-055f-4688-9382-f1c91615bc46","Type":"ContainerDied","Data":"29411b86a185aa97f61a263fd791092a82016efe9d9faf00cde2b5639c3fdae4"} Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.092304 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.210346 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05604b27-055f-4688-9382-f1c91615bc46-inventory\") pod \"05604b27-055f-4688-9382-f1c91615bc46\" (UID: \"05604b27-055f-4688-9382-f1c91615bc46\") " Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.210981 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfrsd\" (UniqueName: \"kubernetes.io/projected/05604b27-055f-4688-9382-f1c91615bc46-kube-api-access-dfrsd\") pod \"05604b27-055f-4688-9382-f1c91615bc46\" (UID: \"05604b27-055f-4688-9382-f1c91615bc46\") " Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.211023 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05604b27-055f-4688-9382-f1c91615bc46-ssh-key\") pod \"05604b27-055f-4688-9382-f1c91615bc46\" (UID: \"05604b27-055f-4688-9382-f1c91615bc46\") " Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.232549 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05604b27-055f-4688-9382-f1c91615bc46-kube-api-access-dfrsd" (OuterVolumeSpecName: "kube-api-access-dfrsd") pod "05604b27-055f-4688-9382-f1c91615bc46" (UID: "05604b27-055f-4688-9382-f1c91615bc46"). InnerVolumeSpecName "kube-api-access-dfrsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.269664 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05604b27-055f-4688-9382-f1c91615bc46-inventory" (OuterVolumeSpecName: "inventory") pod "05604b27-055f-4688-9382-f1c91615bc46" (UID: "05604b27-055f-4688-9382-f1c91615bc46"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.274572 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05604b27-055f-4688-9382-f1c91615bc46-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "05604b27-055f-4688-9382-f1c91615bc46" (UID: "05604b27-055f-4688-9382-f1c91615bc46"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.313454 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05604b27-055f-4688-9382-f1c91615bc46-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.313508 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfrsd\" (UniqueName: \"kubernetes.io/projected/05604b27-055f-4688-9382-f1c91615bc46-kube-api-access-dfrsd\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.313523 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05604b27-055f-4688-9382-f1c91615bc46-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.580634 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5" event={"ID":"05604b27-055f-4688-9382-f1c91615bc46","Type":"ContainerDied","Data":"8f7d871b5c460ed41d9d5265937f6e7a831cef7b7f83f15d03dce1d2616f6f3c"} Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.580713 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f7d871b5c460ed41d9d5265937f6e7a831cef7b7f83f15d03dce1d2616f6f3c" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.580713 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.686794 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl"] Nov 29 07:07:08 crc kubenswrapper[4947]: E1129 07:07:08.687333 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05604b27-055f-4688-9382-f1c91615bc46" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.687361 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="05604b27-055f-4688-9382-f1c91615bc46" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:07:08 crc kubenswrapper[4947]: E1129 07:07:08.687383 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d5c9d7a-2105-48a6-bec7-d46e3141f581" containerName="registry-server" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.687392 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d5c9d7a-2105-48a6-bec7-d46e3141f581" containerName="registry-server" Nov 29 07:07:08 crc kubenswrapper[4947]: E1129 07:07:08.687402 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d5c9d7a-2105-48a6-bec7-d46e3141f581" containerName="extract-utilities" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.687408 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d5c9d7a-2105-48a6-bec7-d46e3141f581" containerName="extract-utilities" Nov 29 07:07:08 crc kubenswrapper[4947]: E1129 07:07:08.687421 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d5c9d7a-2105-48a6-bec7-d46e3141f581" containerName="extract-content" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.687427 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d5c9d7a-2105-48a6-bec7-d46e3141f581" containerName="extract-content" Nov 29 07:07:08 crc 
kubenswrapper[4947]: I1129 07:07:08.687670 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d5c9d7a-2105-48a6-bec7-d46e3141f581" containerName="registry-server" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.687694 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="05604b27-055f-4688-9382-f1c91615bc46" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.688746 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.691360 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.691674 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.692108 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.692203 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.708345 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl"] Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.822741 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eafd7e68-2ecc-4ad5-aacd-844c7a197e36-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q78dl\" (UID: \"eafd7e68-2ecc-4ad5-aacd-844c7a197e36\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.822827 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vswx7\" (UniqueName: \"kubernetes.io/projected/eafd7e68-2ecc-4ad5-aacd-844c7a197e36-kube-api-access-vswx7\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q78dl\" (UID: \"eafd7e68-2ecc-4ad5-aacd-844c7a197e36\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.822977 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eafd7e68-2ecc-4ad5-aacd-844c7a197e36-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q78dl\" (UID: \"eafd7e68-2ecc-4ad5-aacd-844c7a197e36\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.924900 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eafd7e68-2ecc-4ad5-aacd-844c7a197e36-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q78dl\" (UID: \"eafd7e68-2ecc-4ad5-aacd-844c7a197e36\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.925070 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eafd7e68-2ecc-4ad5-aacd-844c7a197e36-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q78dl\" (UID: \"eafd7e68-2ecc-4ad5-aacd-844c7a197e36\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.925163 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vswx7\" (UniqueName: \"kubernetes.io/projected/eafd7e68-2ecc-4ad5-aacd-844c7a197e36-kube-api-access-vswx7\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q78dl\" (UID: \"eafd7e68-2ecc-4ad5-aacd-844c7a197e36\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.930097 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eafd7e68-2ecc-4ad5-aacd-844c7a197e36-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q78dl\" (UID: \"eafd7e68-2ecc-4ad5-aacd-844c7a197e36\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.933198 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eafd7e68-2ecc-4ad5-aacd-844c7a197e36-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q78dl\" (UID: \"eafd7e68-2ecc-4ad5-aacd-844c7a197e36\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl" Nov 29 07:07:08 crc kubenswrapper[4947]: I1129 07:07:08.948814 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vswx7\" (UniqueName: \"kubernetes.io/projected/eafd7e68-2ecc-4ad5-aacd-844c7a197e36-kube-api-access-vswx7\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q78dl\" (UID: \"eafd7e68-2ecc-4ad5-aacd-844c7a197e36\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl" Nov 29 07:07:09 crc kubenswrapper[4947]: I1129 07:07:09.020199 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl" Nov 29 07:07:09 crc kubenswrapper[4947]: I1129 07:07:09.637428 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl"] Nov 29 07:07:10 crc kubenswrapper[4947]: I1129 07:07:10.584875 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:07:10 crc kubenswrapper[4947]: I1129 07:07:10.618233 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl" event={"ID":"eafd7e68-2ecc-4ad5-aacd-844c7a197e36","Type":"ContainerStarted","Data":"9907c908fe22ca598df84f5c5a659b7ee9c6f6d62f36a9160ecd271b14571dc3"} Nov 29 07:07:11 crc kubenswrapper[4947]: I1129 07:07:11.633531 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl" event={"ID":"eafd7e68-2ecc-4ad5-aacd-844c7a197e36","Type":"ContainerStarted","Data":"0dce44565c5c2d6515956c2cbdf4578c47b76881db51440c12130fe66215b881"} Nov 29 07:07:11 crc kubenswrapper[4947]: I1129 07:07:11.667855 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl" podStartSLOduration=2.734272314 podStartE2EDuration="3.667824871s" podCreationTimestamp="2025-11-29 07:07:08 +0000 UTC" firstStartedPulling="2025-11-29 07:07:09.64838676 +0000 UTC m=+1980.692768841" lastFinishedPulling="2025-11-29 07:07:10.581939317 +0000 UTC m=+1981.626321398" observedRunningTime="2025-11-29 07:07:11.658027375 +0000 UTC m=+1982.702409456" watchObservedRunningTime="2025-11-29 07:07:11.667824871 +0000 UTC m=+1982.712206952" Nov 29 07:07:13 crc kubenswrapper[4947]: I1129 07:07:13.179743 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 
07:07:13 crc kubenswrapper[4947]: E1129 07:07:13.180239 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:07:17 crc kubenswrapper[4947]: I1129 07:07:17.692673 4947 generic.go:334] "Generic (PLEG): container finished" podID="eafd7e68-2ecc-4ad5-aacd-844c7a197e36" containerID="0dce44565c5c2d6515956c2cbdf4578c47b76881db51440c12130fe66215b881" exitCode=0 Nov 29 07:07:17 crc kubenswrapper[4947]: I1129 07:07:17.692739 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl" event={"ID":"eafd7e68-2ecc-4ad5-aacd-844c7a197e36","Type":"ContainerDied","Data":"0dce44565c5c2d6515956c2cbdf4578c47b76881db51440c12130fe66215b881"} Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.198060 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.272545 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vswx7\" (UniqueName: \"kubernetes.io/projected/eafd7e68-2ecc-4ad5-aacd-844c7a197e36-kube-api-access-vswx7\") pod \"eafd7e68-2ecc-4ad5-aacd-844c7a197e36\" (UID: \"eafd7e68-2ecc-4ad5-aacd-844c7a197e36\") " Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.272657 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eafd7e68-2ecc-4ad5-aacd-844c7a197e36-inventory\") pod \"eafd7e68-2ecc-4ad5-aacd-844c7a197e36\" (UID: \"eafd7e68-2ecc-4ad5-aacd-844c7a197e36\") " Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.272688 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eafd7e68-2ecc-4ad5-aacd-844c7a197e36-ssh-key\") pod \"eafd7e68-2ecc-4ad5-aacd-844c7a197e36\" (UID: \"eafd7e68-2ecc-4ad5-aacd-844c7a197e36\") " Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.281907 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eafd7e68-2ecc-4ad5-aacd-844c7a197e36-kube-api-access-vswx7" (OuterVolumeSpecName: "kube-api-access-vswx7") pod "eafd7e68-2ecc-4ad5-aacd-844c7a197e36" (UID: "eafd7e68-2ecc-4ad5-aacd-844c7a197e36"). InnerVolumeSpecName "kube-api-access-vswx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.308725 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eafd7e68-2ecc-4ad5-aacd-844c7a197e36-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eafd7e68-2ecc-4ad5-aacd-844c7a197e36" (UID: "eafd7e68-2ecc-4ad5-aacd-844c7a197e36"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.324036 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eafd7e68-2ecc-4ad5-aacd-844c7a197e36-inventory" (OuterVolumeSpecName: "inventory") pod "eafd7e68-2ecc-4ad5-aacd-844c7a197e36" (UID: "eafd7e68-2ecc-4ad5-aacd-844c7a197e36"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.375652 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vswx7\" (UniqueName: \"kubernetes.io/projected/eafd7e68-2ecc-4ad5-aacd-844c7a197e36-kube-api-access-vswx7\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.375838 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eafd7e68-2ecc-4ad5-aacd-844c7a197e36-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.375933 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eafd7e68-2ecc-4ad5-aacd-844c7a197e36-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.726693 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl" event={"ID":"eafd7e68-2ecc-4ad5-aacd-844c7a197e36","Type":"ContainerDied","Data":"9907c908fe22ca598df84f5c5a659b7ee9c6f6d62f36a9160ecd271b14571dc3"} Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.726754 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9907c908fe22ca598df84f5c5a659b7ee9c6f6d62f36a9160ecd271b14571dc3" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.727265 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.795700 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g"] Nov 29 07:07:19 crc kubenswrapper[4947]: E1129 07:07:19.796343 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eafd7e68-2ecc-4ad5-aacd-844c7a197e36" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.796372 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="eafd7e68-2ecc-4ad5-aacd-844c7a197e36" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.796648 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="eafd7e68-2ecc-4ad5-aacd-844c7a197e36" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.797430 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.802175 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.802738 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.802931 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.803079 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.813163 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g"] Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.886841 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed39e78d-b4a5-4a28-95c8-a608d580b517-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcn4g\" (UID: \"ed39e78d-b4a5-4a28-95c8-a608d580b517\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.886925 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncx7d\" (UniqueName: \"kubernetes.io/projected/ed39e78d-b4a5-4a28-95c8-a608d580b517-kube-api-access-ncx7d\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcn4g\" (UID: \"ed39e78d-b4a5-4a28-95c8-a608d580b517\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.886961 4947 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed39e78d-b4a5-4a28-95c8-a608d580b517-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcn4g\" (UID: \"ed39e78d-b4a5-4a28-95c8-a608d580b517\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.991437 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed39e78d-b4a5-4a28-95c8-a608d580b517-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcn4g\" (UID: \"ed39e78d-b4a5-4a28-95c8-a608d580b517\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.992054 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed39e78d-b4a5-4a28-95c8-a608d580b517-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcn4g\" (UID: \"ed39e78d-b4a5-4a28-95c8-a608d580b517\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.992363 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncx7d\" (UniqueName: \"kubernetes.io/projected/ed39e78d-b4a5-4a28-95c8-a608d580b517-kube-api-access-ncx7d\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcn4g\" (UID: \"ed39e78d-b4a5-4a28-95c8-a608d580b517\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.997518 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed39e78d-b4a5-4a28-95c8-a608d580b517-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcn4g\" (UID: 
\"ed39e78d-b4a5-4a28-95c8-a608d580b517\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g" Nov 29 07:07:19 crc kubenswrapper[4947]: I1129 07:07:19.999597 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed39e78d-b4a5-4a28-95c8-a608d580b517-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcn4g\" (UID: \"ed39e78d-b4a5-4a28-95c8-a608d580b517\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g" Nov 29 07:07:20 crc kubenswrapper[4947]: I1129 07:07:20.021010 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncx7d\" (UniqueName: \"kubernetes.io/projected/ed39e78d-b4a5-4a28-95c8-a608d580b517-kube-api-access-ncx7d\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcn4g\" (UID: \"ed39e78d-b4a5-4a28-95c8-a608d580b517\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g" Nov 29 07:07:20 crc kubenswrapper[4947]: I1129 07:07:20.124607 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g" Nov 29 07:07:20 crc kubenswrapper[4947]: I1129 07:07:20.773625 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g"] Nov 29 07:07:21 crc kubenswrapper[4947]: I1129 07:07:21.757733 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g" event={"ID":"ed39e78d-b4a5-4a28-95c8-a608d580b517","Type":"ContainerStarted","Data":"87fcb7ffdf251a30907e0a52cd35fa38328d72a4e8577f1d0075310a2b9428f1"} Nov 29 07:07:22 crc kubenswrapper[4947]: I1129 07:07:22.771668 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g" event={"ID":"ed39e78d-b4a5-4a28-95c8-a608d580b517","Type":"ContainerStarted","Data":"99fd562d9b583388ae9893c0a79ea064ac319133912fd97c4a7a770a995db776"} Nov 29 07:07:22 crc kubenswrapper[4947]: I1129 07:07:22.796368 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g" podStartSLOduration=3.270529555 podStartE2EDuration="3.796341922s" podCreationTimestamp="2025-11-29 07:07:19 +0000 UTC" firstStartedPulling="2025-11-29 07:07:20.750467087 +0000 UTC m=+1991.794849158" lastFinishedPulling="2025-11-29 07:07:21.276279444 +0000 UTC m=+1992.320661525" observedRunningTime="2025-11-29 07:07:22.79546088 +0000 UTC m=+1993.839842961" watchObservedRunningTime="2025-11-29 07:07:22.796341922 +0000 UTC m=+1993.840724003" Nov 29 07:07:27 crc kubenswrapper[4947]: I1129 07:07:27.180396 4947 scope.go:117] "RemoveContainer" containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:07:27 crc kubenswrapper[4947]: I1129 07:07:27.974975 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" 
event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"3f350df373165ad762649db5ed6ef7635f995dc53553f1a53de7c07149d00d23"} Nov 29 07:07:32 crc kubenswrapper[4947]: I1129 07:07:32.059090 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-lwblx"] Nov 29 07:07:32 crc kubenswrapper[4947]: I1129 07:07:32.071643 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-lwblx"] Nov 29 07:07:33 crc kubenswrapper[4947]: I1129 07:07:33.045510 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-d5xx7"] Nov 29 07:07:33 crc kubenswrapper[4947]: I1129 07:07:33.063964 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-d40e-account-create-update-rx5gn"] Nov 29 07:07:33 crc kubenswrapper[4947]: I1129 07:07:33.092497 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-29srh"] Nov 29 07:07:33 crc kubenswrapper[4947]: I1129 07:07:33.112994 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-e513-account-create-update-nsfwk"] Nov 29 07:07:33 crc kubenswrapper[4947]: I1129 07:07:33.130294 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-ecc6-account-create-update-299dm"] Nov 29 07:07:33 crc kubenswrapper[4947]: I1129 07:07:33.141724 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-29srh"] Nov 29 07:07:33 crc kubenswrapper[4947]: I1129 07:07:33.152414 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-e513-account-create-update-nsfwk"] Nov 29 07:07:33 crc kubenswrapper[4947]: I1129 07:07:33.162456 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-d5xx7"] Nov 29 07:07:33 crc kubenswrapper[4947]: I1129 07:07:33.171960 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-d40e-account-create-update-rx5gn"] Nov 29 07:07:33 crc kubenswrapper[4947]: I1129 07:07:33.194506 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d5a6e2f-204f-4356-b140-e1a58c242965" path="/var/lib/kubelet/pods/5d5a6e2f-204f-4356-b140-e1a58c242965/volumes" Nov 29 07:07:33 crc kubenswrapper[4947]: I1129 07:07:33.195468 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69c8e8b7-9d6e-4918-acb3-77788534fba2" path="/var/lib/kubelet/pods/69c8e8b7-9d6e-4918-acb3-77788534fba2/volumes" Nov 29 07:07:33 crc kubenswrapper[4947]: I1129 07:07:33.196190 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7" path="/var/lib/kubelet/pods/8e2b4d19-d04b-4a05-9967-6c08d6b7b3a7/volumes" Nov 29 07:07:33 crc kubenswrapper[4947]: I1129 07:07:33.196911 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92c4d360-3806-49fc-85be-f8e1ea6d5975" path="/var/lib/kubelet/pods/92c4d360-3806-49fc-85be-f8e1ea6d5975/volumes" Nov 29 07:07:33 crc kubenswrapper[4947]: I1129 07:07:33.199864 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aadd9963-9c1e-4c5e-b03e-6577b3f1f139" path="/var/lib/kubelet/pods/aadd9963-9c1e-4c5e-b03e-6577b3f1f139/volumes" Nov 29 07:07:33 crc kubenswrapper[4947]: I1129 07:07:33.200852 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-ecc6-account-create-update-299dm"] Nov 29 07:07:35 crc kubenswrapper[4947]: I1129 07:07:35.191837 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b100e9b-0224-436a-a3a5-73587eda6743" path="/var/lib/kubelet/pods/5b100e9b-0224-436a-a3a5-73587eda6743/volumes" Nov 29 07:07:36 crc kubenswrapper[4947]: I1129 07:07:36.865310 4947 scope.go:117] "RemoveContainer" containerID="1eb2fad9b9dc5d6de43102ccb18fc1faf81b53850822467b8df8390b76fb827b" Nov 29 07:07:36 crc kubenswrapper[4947]: I1129 07:07:36.941044 4947 
scope.go:117] "RemoveContainer" containerID="ce8f8cb9067aa12271fb0cb6afd6975372be8f57fc10889e30cd152f2cbc058d" Nov 29 07:07:36 crc kubenswrapper[4947]: I1129 07:07:36.990132 4947 scope.go:117] "RemoveContainer" containerID="dea28b99f1041cfd6ccad7f37b448d068eec67196b39a0e12bbb99e845020b24" Nov 29 07:07:37 crc kubenswrapper[4947]: I1129 07:07:37.052411 4947 scope.go:117] "RemoveContainer" containerID="9da98ec2a732bde0b44163e194a27d5414f8a25f43c72dab0f8f03eeea461c10" Nov 29 07:07:37 crc kubenswrapper[4947]: I1129 07:07:37.088716 4947 scope.go:117] "RemoveContainer" containerID="8f40eab40b5eb7d2e182604267d1bd92ea1eb89641fe6d90f1afa133e486177c" Nov 29 07:07:37 crc kubenswrapper[4947]: I1129 07:07:37.147689 4947 scope.go:117] "RemoveContainer" containerID="61b442d67041139b67a4033884792482a78ae8f47cfcce09d6795a7f7837fd8f" Nov 29 07:07:37 crc kubenswrapper[4947]: I1129 07:07:37.203675 4947 scope.go:117] "RemoveContainer" containerID="e3996ec78a4407f0f39b3068a197d553922ee5f1b0a9155bead9dca46afe0267" Nov 29 07:07:37 crc kubenswrapper[4947]: I1129 07:07:37.261928 4947 scope.go:117] "RemoveContainer" containerID="9f34816641510593c4f314d556b63978ad12e56bf50a7d9fd70958607ffbf0d4" Nov 29 07:07:37 crc kubenswrapper[4947]: I1129 07:07:37.303179 4947 scope.go:117] "RemoveContainer" containerID="f5adf08f1c8170eaf109bc2ebac79fcb7d835e3f7563f1e03ca4be766e37dffc" Nov 29 07:07:37 crc kubenswrapper[4947]: I1129 07:07:37.338160 4947 scope.go:117] "RemoveContainer" containerID="51f1569b74daff9ffa35e5d003b27155e38ea55be2d197756e6ddf3a87ae8a41" Nov 29 07:07:37 crc kubenswrapper[4947]: I1129 07:07:37.375533 4947 scope.go:117] "RemoveContainer" containerID="d094cbacc0562b9e4a52068ebcc2eb01cb11df7772d6028bab72f1ff549b5121" Nov 29 07:07:37 crc kubenswrapper[4947]: I1129 07:07:37.433469 4947 scope.go:117] "RemoveContainer" containerID="e6bb25a7208b2914ae8e17881601d353d2e2fb5cd5549fd394606776c2443dc9" Nov 29 07:08:04 crc kubenswrapper[4947]: I1129 07:08:04.374639 4947 generic.go:334] "Generic 
(PLEG): container finished" podID="ed39e78d-b4a5-4a28-95c8-a608d580b517" containerID="99fd562d9b583388ae9893c0a79ea064ac319133912fd97c4a7a770a995db776" exitCode=0 Nov 29 07:08:04 crc kubenswrapper[4947]: I1129 07:08:04.374724 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g" event={"ID":"ed39e78d-b4a5-4a28-95c8-a608d580b517","Type":"ContainerDied","Data":"99fd562d9b583388ae9893c0a79ea064ac319133912fd97c4a7a770a995db776"} Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:05.830518 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:05.948651 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed39e78d-b4a5-4a28-95c8-a608d580b517-ssh-key\") pod \"ed39e78d-b4a5-4a28-95c8-a608d580b517\" (UID: \"ed39e78d-b4a5-4a28-95c8-a608d580b517\") " Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:05.949377 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed39e78d-b4a5-4a28-95c8-a608d580b517-inventory\") pod \"ed39e78d-b4a5-4a28-95c8-a608d580b517\" (UID: \"ed39e78d-b4a5-4a28-95c8-a608d580b517\") " Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:05.949639 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncx7d\" (UniqueName: \"kubernetes.io/projected/ed39e78d-b4a5-4a28-95c8-a608d580b517-kube-api-access-ncx7d\") pod \"ed39e78d-b4a5-4a28-95c8-a608d580b517\" (UID: \"ed39e78d-b4a5-4a28-95c8-a608d580b517\") " Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:05.956761 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed39e78d-b4a5-4a28-95c8-a608d580b517-kube-api-access-ncx7d" 
(OuterVolumeSpecName: "kube-api-access-ncx7d") pod "ed39e78d-b4a5-4a28-95c8-a608d580b517" (UID: "ed39e78d-b4a5-4a28-95c8-a608d580b517"). InnerVolumeSpecName "kube-api-access-ncx7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:05.981231 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed39e78d-b4a5-4a28-95c8-a608d580b517-inventory" (OuterVolumeSpecName: "inventory") pod "ed39e78d-b4a5-4a28-95c8-a608d580b517" (UID: "ed39e78d-b4a5-4a28-95c8-a608d580b517"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:05.982560 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed39e78d-b4a5-4a28-95c8-a608d580b517-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ed39e78d-b4a5-4a28-95c8-a608d580b517" (UID: "ed39e78d-b4a5-4a28-95c8-a608d580b517"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.053837 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncx7d\" (UniqueName: \"kubernetes.io/projected/ed39e78d-b4a5-4a28-95c8-a608d580b517-kube-api-access-ncx7d\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.053881 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed39e78d-b4a5-4a28-95c8-a608d580b517-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.053894 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed39e78d-b4a5-4a28-95c8-a608d580b517-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.393878 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g" event={"ID":"ed39e78d-b4a5-4a28-95c8-a608d580b517","Type":"ContainerDied","Data":"87fcb7ffdf251a30907e0a52cd35fa38328d72a4e8577f1d0075310a2b9428f1"} Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.394202 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87fcb7ffdf251a30907e0a52cd35fa38328d72a4e8577f1d0075310a2b9428f1" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.393915 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.524840 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9"] Nov 29 07:08:13 crc kubenswrapper[4947]: E1129 07:08:06.525427 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed39e78d-b4a5-4a28-95c8-a608d580b517" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.525443 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed39e78d-b4a5-4a28-95c8-a608d580b517" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.525971 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed39e78d-b4a5-4a28-95c8-a608d580b517" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.526746 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.531363 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.531370 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.532652 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.532777 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.539137 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9"] Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.696172 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4034151e-79e6-4a43-a0aa-3d9a41af19ee-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9\" (UID: \"4034151e-79e6-4a43-a0aa-3d9a41af19ee\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.696451 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4034151e-79e6-4a43-a0aa-3d9a41af19ee-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9\" (UID: \"4034151e-79e6-4a43-a0aa-3d9a41af19ee\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.696661 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-692p6\" (UniqueName: \"kubernetes.io/projected/4034151e-79e6-4a43-a0aa-3d9a41af19ee-kube-api-access-692p6\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9\" (UID: \"4034151e-79e6-4a43-a0aa-3d9a41af19ee\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.798760 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-692p6\" (UniqueName: \"kubernetes.io/projected/4034151e-79e6-4a43-a0aa-3d9a41af19ee-kube-api-access-692p6\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9\" (UID: \"4034151e-79e6-4a43-a0aa-3d9a41af19ee\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.798896 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4034151e-79e6-4a43-a0aa-3d9a41af19ee-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9\" (UID: \"4034151e-79e6-4a43-a0aa-3d9a41af19ee\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.798968 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4034151e-79e6-4a43-a0aa-3d9a41af19ee-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9\" (UID: \"4034151e-79e6-4a43-a0aa-3d9a41af19ee\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.812689 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4034151e-79e6-4a43-a0aa-3d9a41af19ee-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9\" (UID: 
\"4034151e-79e6-4a43-a0aa-3d9a41af19ee\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.814069 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4034151e-79e6-4a43-a0aa-3d9a41af19ee-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9\" (UID: \"4034151e-79e6-4a43-a0aa-3d9a41af19ee\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.817157 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-692p6\" (UniqueName: \"kubernetes.io/projected/4034151e-79e6-4a43-a0aa-3d9a41af19ee-kube-api-access-692p6\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9\" (UID: \"4034151e-79e6-4a43-a0aa-3d9a41af19ee\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:06.854623 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9" Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:12.063188 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7l78q"] Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:12.076136 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7l78q"] Nov 29 07:08:13 crc kubenswrapper[4947]: I1129 07:08:13.192732 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d7c47be-5cbc-4cae-8eae-055a4693547c" path="/var/lib/kubelet/pods/4d7c47be-5cbc-4cae-8eae-055a4693547c/volumes" Nov 29 07:08:14 crc kubenswrapper[4947]: I1129 07:08:14.103906 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9"] Nov 29 07:08:14 crc kubenswrapper[4947]: I1129 07:08:14.114383 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:08:14 crc kubenswrapper[4947]: I1129 07:08:14.481018 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9" event={"ID":"4034151e-79e6-4a43-a0aa-3d9a41af19ee","Type":"ContainerStarted","Data":"d7570b7d93ca8ccc2d46f996fcb66eec4938d8e5ad8e45882d06a7a26b534074"} Nov 29 07:08:16 crc kubenswrapper[4947]: I1129 07:08:16.991885 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:08:17 crc kubenswrapper[4947]: I1129 07:08:17.509200 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9" event={"ID":"4034151e-79e6-4a43-a0aa-3d9a41af19ee","Type":"ContainerStarted","Data":"4a2032a3f792f0c10a8d656f365c1d2d245f994d531b9258e1940e19b43d3b05"} Nov 29 07:08:18 crc kubenswrapper[4947]: I1129 07:08:18.558398 4947 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9" podStartSLOduration=9.684050769 podStartE2EDuration="12.558353728s" podCreationTimestamp="2025-11-29 07:08:06 +0000 UTC" firstStartedPulling="2025-11-29 07:08:14.114049695 +0000 UTC m=+2045.158431786" lastFinishedPulling="2025-11-29 07:08:16.988352664 +0000 UTC m=+2048.032734745" observedRunningTime="2025-11-29 07:08:18.549454225 +0000 UTC m=+2049.593836306" watchObservedRunningTime="2025-11-29 07:08:18.558353728 +0000 UTC m=+2049.602735809" Nov 29 07:08:22 crc kubenswrapper[4947]: I1129 07:08:22.561949 4947 generic.go:334] "Generic (PLEG): container finished" podID="4034151e-79e6-4a43-a0aa-3d9a41af19ee" containerID="4a2032a3f792f0c10a8d656f365c1d2d245f994d531b9258e1940e19b43d3b05" exitCode=0 Nov 29 07:08:22 crc kubenswrapper[4947]: I1129 07:08:22.562049 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9" event={"ID":"4034151e-79e6-4a43-a0aa-3d9a41af19ee","Type":"ContainerDied","Data":"4a2032a3f792f0c10a8d656f365c1d2d245f994d531b9258e1940e19b43d3b05"} Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.217112 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.299538 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-692p6\" (UniqueName: \"kubernetes.io/projected/4034151e-79e6-4a43-a0aa-3d9a41af19ee-kube-api-access-692p6\") pod \"4034151e-79e6-4a43-a0aa-3d9a41af19ee\" (UID: \"4034151e-79e6-4a43-a0aa-3d9a41af19ee\") " Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.299975 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4034151e-79e6-4a43-a0aa-3d9a41af19ee-ssh-key\") pod \"4034151e-79e6-4a43-a0aa-3d9a41af19ee\" (UID: \"4034151e-79e6-4a43-a0aa-3d9a41af19ee\") " Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.300127 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4034151e-79e6-4a43-a0aa-3d9a41af19ee-inventory\") pod \"4034151e-79e6-4a43-a0aa-3d9a41af19ee\" (UID: \"4034151e-79e6-4a43-a0aa-3d9a41af19ee\") " Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.308592 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4034151e-79e6-4a43-a0aa-3d9a41af19ee-kube-api-access-692p6" (OuterVolumeSpecName: "kube-api-access-692p6") pod "4034151e-79e6-4a43-a0aa-3d9a41af19ee" (UID: "4034151e-79e6-4a43-a0aa-3d9a41af19ee"). InnerVolumeSpecName "kube-api-access-692p6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.336616 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4034151e-79e6-4a43-a0aa-3d9a41af19ee-inventory" (OuterVolumeSpecName: "inventory") pod "4034151e-79e6-4a43-a0aa-3d9a41af19ee" (UID: "4034151e-79e6-4a43-a0aa-3d9a41af19ee"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.339290 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4034151e-79e6-4a43-a0aa-3d9a41af19ee-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4034151e-79e6-4a43-a0aa-3d9a41af19ee" (UID: "4034151e-79e6-4a43-a0aa-3d9a41af19ee"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.405248 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4034151e-79e6-4a43-a0aa-3d9a41af19ee-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.405281 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-692p6\" (UniqueName: \"kubernetes.io/projected/4034151e-79e6-4a43-a0aa-3d9a41af19ee-kube-api-access-692p6\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.405292 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4034151e-79e6-4a43-a0aa-3d9a41af19ee-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.587106 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9" event={"ID":"4034151e-79e6-4a43-a0aa-3d9a41af19ee","Type":"ContainerDied","Data":"d7570b7d93ca8ccc2d46f996fcb66eec4938d8e5ad8e45882d06a7a26b534074"} Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.587589 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7570b7d93ca8ccc2d46f996fcb66eec4938d8e5ad8e45882d06a7a26b534074" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.587153 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.697432 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh"] Nov 29 07:08:24 crc kubenswrapper[4947]: E1129 07:08:24.698001 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4034151e-79e6-4a43-a0aa-3d9a41af19ee" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.698035 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="4034151e-79e6-4a43-a0aa-3d9a41af19ee" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.698388 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="4034151e-79e6-4a43-a0aa-3d9a41af19ee" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.699359 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.702028 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.702118 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.703399 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.703671 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.713070 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89ab0899-e8dd-40b6-9db7-6d4d567fc251-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh\" (UID: \"89ab0899-e8dd-40b6-9db7-6d4d567fc251\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.713188 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89ab0899-e8dd-40b6-9db7-6d4d567fc251-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh\" (UID: \"89ab0899-e8dd-40b6-9db7-6d4d567fc251\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.713461 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2flz2\" (UniqueName: \"kubernetes.io/projected/89ab0899-e8dd-40b6-9db7-6d4d567fc251-kube-api-access-2flz2\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh\" (UID: \"89ab0899-e8dd-40b6-9db7-6d4d567fc251\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.715186 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh"] Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.815786 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89ab0899-e8dd-40b6-9db7-6d4d567fc251-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh\" (UID: \"89ab0899-e8dd-40b6-9db7-6d4d567fc251\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.815990 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2flz2\" (UniqueName: \"kubernetes.io/projected/89ab0899-e8dd-40b6-9db7-6d4d567fc251-kube-api-access-2flz2\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh\" (UID: \"89ab0899-e8dd-40b6-9db7-6d4d567fc251\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.816091 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89ab0899-e8dd-40b6-9db7-6d4d567fc251-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh\" (UID: \"89ab0899-e8dd-40b6-9db7-6d4d567fc251\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.820838 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89ab0899-e8dd-40b6-9db7-6d4d567fc251-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh\" (UID: 
\"89ab0899-e8dd-40b6-9db7-6d4d567fc251\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.820947 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89ab0899-e8dd-40b6-9db7-6d4d567fc251-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh\" (UID: \"89ab0899-e8dd-40b6-9db7-6d4d567fc251\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh" Nov 29 07:08:24 crc kubenswrapper[4947]: I1129 07:08:24.845688 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2flz2\" (UniqueName: \"kubernetes.io/projected/89ab0899-e8dd-40b6-9db7-6d4d567fc251-kube-api-access-2flz2\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh\" (UID: \"89ab0899-e8dd-40b6-9db7-6d4d567fc251\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh" Nov 29 07:08:25 crc kubenswrapper[4947]: I1129 07:08:25.031612 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh" Nov 29 07:08:25 crc kubenswrapper[4947]: I1129 07:08:25.651545 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh"] Nov 29 07:08:26 crc kubenswrapper[4947]: I1129 07:08:26.626992 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh" event={"ID":"89ab0899-e8dd-40b6-9db7-6d4d567fc251","Type":"ContainerStarted","Data":"eda0833a8b2e9310b21019044c5b1c6f7a6dc5d58d454ca2ce48de1512c377ba"} Nov 29 07:08:28 crc kubenswrapper[4947]: I1129 07:08:28.669918 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh" event={"ID":"89ab0899-e8dd-40b6-9db7-6d4d567fc251","Type":"ContainerStarted","Data":"d4b358deb8222da1b93452bb925bebcc37db6667de3a046485d8897bd3cd06f8"} Nov 29 07:08:28 crc kubenswrapper[4947]: I1129 07:08:28.710044 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh" podStartSLOduration=2.920385609 podStartE2EDuration="4.710019254s" podCreationTimestamp="2025-11-29 07:08:24 +0000 UTC" firstStartedPulling="2025-11-29 07:08:25.671505987 +0000 UTC m=+2056.715888068" lastFinishedPulling="2025-11-29 07:08:27.461139632 +0000 UTC m=+2058.505521713" observedRunningTime="2025-11-29 07:08:28.708560768 +0000 UTC m=+2059.752942859" watchObservedRunningTime="2025-11-29 07:08:28.710019254 +0000 UTC m=+2059.754401345" Nov 29 07:08:37 crc kubenswrapper[4947]: I1129 07:08:37.056013 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-7w6zr"] Nov 29 07:08:37 crc kubenswrapper[4947]: I1129 07:08:37.065739 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-7w6zr"] Nov 29 07:08:37 crc kubenswrapper[4947]: I1129 
07:08:37.191712 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc39fbb3-cd61-46df-9549-9a75ff63206e" path="/var/lib/kubelet/pods/bc39fbb3-cd61-46df-9549-9a75ff63206e/volumes" Nov 29 07:08:37 crc kubenswrapper[4947]: I1129 07:08:37.729996 4947 scope.go:117] "RemoveContainer" containerID="89c1f2844aa41674ee039340c8f4908911aee71616ae7f37a62f133381644b56" Nov 29 07:08:37 crc kubenswrapper[4947]: I1129 07:08:37.803402 4947 scope.go:117] "RemoveContainer" containerID="88301c4a6525dd3464076b30396bec71b2f89f07078bcb17eb60b208cd865eba" Nov 29 07:08:50 crc kubenswrapper[4947]: I1129 07:08:50.048147 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dvx7w"] Nov 29 07:08:50 crc kubenswrapper[4947]: I1129 07:08:50.059132 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dvx7w"] Nov 29 07:08:51 crc kubenswrapper[4947]: I1129 07:08:51.196290 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdacfe2a-30f5-443f-b368-9019ec66fb2e" path="/var/lib/kubelet/pods/fdacfe2a-30f5-443f-b368-9019ec66fb2e/volumes" Nov 29 07:09:04 crc kubenswrapper[4947]: I1129 07:09:04.385459 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-66hcs"] Nov 29 07:09:04 crc kubenswrapper[4947]: I1129 07:09:04.421085 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-66hcs" Nov 29 07:09:04 crc kubenswrapper[4947]: I1129 07:09:04.435298 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-66hcs"] Nov 29 07:09:04 crc kubenswrapper[4947]: I1129 07:09:04.513383 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad6bbca2-e9ac-414c-8320-a58775322b7b-utilities\") pod \"redhat-operators-66hcs\" (UID: \"ad6bbca2-e9ac-414c-8320-a58775322b7b\") " pod="openshift-marketplace/redhat-operators-66hcs" Nov 29 07:09:04 crc kubenswrapper[4947]: I1129 07:09:04.513953 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x42lq\" (UniqueName: \"kubernetes.io/projected/ad6bbca2-e9ac-414c-8320-a58775322b7b-kube-api-access-x42lq\") pod \"redhat-operators-66hcs\" (UID: \"ad6bbca2-e9ac-414c-8320-a58775322b7b\") " pod="openshift-marketplace/redhat-operators-66hcs" Nov 29 07:09:04 crc kubenswrapper[4947]: I1129 07:09:04.514013 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad6bbca2-e9ac-414c-8320-a58775322b7b-catalog-content\") pod \"redhat-operators-66hcs\" (UID: \"ad6bbca2-e9ac-414c-8320-a58775322b7b\") " pod="openshift-marketplace/redhat-operators-66hcs" Nov 29 07:09:04 crc kubenswrapper[4947]: I1129 07:09:04.616474 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x42lq\" (UniqueName: \"kubernetes.io/projected/ad6bbca2-e9ac-414c-8320-a58775322b7b-kube-api-access-x42lq\") pod \"redhat-operators-66hcs\" (UID: \"ad6bbca2-e9ac-414c-8320-a58775322b7b\") " pod="openshift-marketplace/redhat-operators-66hcs" Nov 29 07:09:04 crc kubenswrapper[4947]: I1129 07:09:04.616573 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad6bbca2-e9ac-414c-8320-a58775322b7b-catalog-content\") pod \"redhat-operators-66hcs\" (UID: \"ad6bbca2-e9ac-414c-8320-a58775322b7b\") " pod="openshift-marketplace/redhat-operators-66hcs" Nov 29 07:09:04 crc kubenswrapper[4947]: I1129 07:09:04.616683 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad6bbca2-e9ac-414c-8320-a58775322b7b-utilities\") pod \"redhat-operators-66hcs\" (UID: \"ad6bbca2-e9ac-414c-8320-a58775322b7b\") " pod="openshift-marketplace/redhat-operators-66hcs" Nov 29 07:09:04 crc kubenswrapper[4947]: I1129 07:09:04.617336 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad6bbca2-e9ac-414c-8320-a58775322b7b-utilities\") pod \"redhat-operators-66hcs\" (UID: \"ad6bbca2-e9ac-414c-8320-a58775322b7b\") " pod="openshift-marketplace/redhat-operators-66hcs" Nov 29 07:09:04 crc kubenswrapper[4947]: I1129 07:09:04.618004 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad6bbca2-e9ac-414c-8320-a58775322b7b-catalog-content\") pod \"redhat-operators-66hcs\" (UID: \"ad6bbca2-e9ac-414c-8320-a58775322b7b\") " pod="openshift-marketplace/redhat-operators-66hcs" Nov 29 07:09:04 crc kubenswrapper[4947]: I1129 07:09:04.650696 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x42lq\" (UniqueName: \"kubernetes.io/projected/ad6bbca2-e9ac-414c-8320-a58775322b7b-kube-api-access-x42lq\") pod \"redhat-operators-66hcs\" (UID: \"ad6bbca2-e9ac-414c-8320-a58775322b7b\") " pod="openshift-marketplace/redhat-operators-66hcs" Nov 29 07:09:04 crc kubenswrapper[4947]: I1129 07:09:04.771776 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-66hcs" Nov 29 07:09:05 crc kubenswrapper[4947]: I1129 07:09:05.289676 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-66hcs"] Nov 29 07:09:06 crc kubenswrapper[4947]: I1129 07:09:06.018762 4947 generic.go:334] "Generic (PLEG): container finished" podID="ad6bbca2-e9ac-414c-8320-a58775322b7b" containerID="7ca947aefd65e008170f4d694d5326a667000d2f511336038750a8e05c88d334" exitCode=0 Nov 29 07:09:06 crc kubenswrapper[4947]: I1129 07:09:06.019066 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66hcs" event={"ID":"ad6bbca2-e9ac-414c-8320-a58775322b7b","Type":"ContainerDied","Data":"7ca947aefd65e008170f4d694d5326a667000d2f511336038750a8e05c88d334"} Nov 29 07:09:06 crc kubenswrapper[4947]: I1129 07:09:06.019092 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66hcs" event={"ID":"ad6bbca2-e9ac-414c-8320-a58775322b7b","Type":"ContainerStarted","Data":"df73b687b5bb6a041939e0180a7d6444c0e28eba649d0f83c9d66ada1b59bb00"} Nov 29 07:09:10 crc kubenswrapper[4947]: I1129 07:09:10.060160 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66hcs" event={"ID":"ad6bbca2-e9ac-414c-8320-a58775322b7b","Type":"ContainerStarted","Data":"07e5a4a59e9985d1fbecca5524c2aab32d434b451a4f6b230caa3923a74f5c80"} Nov 29 07:09:13 crc kubenswrapper[4947]: I1129 07:09:13.089246 4947 generic.go:334] "Generic (PLEG): container finished" podID="ad6bbca2-e9ac-414c-8320-a58775322b7b" containerID="07e5a4a59e9985d1fbecca5524c2aab32d434b451a4f6b230caa3923a74f5c80" exitCode=0 Nov 29 07:09:13 crc kubenswrapper[4947]: I1129 07:09:13.089445 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66hcs" 
event={"ID":"ad6bbca2-e9ac-414c-8320-a58775322b7b","Type":"ContainerDied","Data":"07e5a4a59e9985d1fbecca5524c2aab32d434b451a4f6b230caa3923a74f5c80"} Nov 29 07:09:17 crc kubenswrapper[4947]: I1129 07:09:17.130304 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66hcs" event={"ID":"ad6bbca2-e9ac-414c-8320-a58775322b7b","Type":"ContainerStarted","Data":"11684dea5f6ed366da449e5a6c04c6ad205df39f6e334e2c83c5f852cd483acd"} Nov 29 07:09:17 crc kubenswrapper[4947]: I1129 07:09:17.161681 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-66hcs" podStartSLOduration=2.829255532 podStartE2EDuration="13.161644191s" podCreationTimestamp="2025-11-29 07:09:04 +0000 UTC" firstStartedPulling="2025-11-29 07:09:06.020934614 +0000 UTC m=+2097.065316775" lastFinishedPulling="2025-11-29 07:09:16.353323353 +0000 UTC m=+2107.397705434" observedRunningTime="2025-11-29 07:09:17.154714357 +0000 UTC m=+2108.199096458" watchObservedRunningTime="2025-11-29 07:09:17.161644191 +0000 UTC m=+2108.206026272" Nov 29 07:09:24 crc kubenswrapper[4947]: I1129 07:09:24.200178 4947 generic.go:334] "Generic (PLEG): container finished" podID="89ab0899-e8dd-40b6-9db7-6d4d567fc251" containerID="d4b358deb8222da1b93452bb925bebcc37db6667de3a046485d8897bd3cd06f8" exitCode=0 Nov 29 07:09:24 crc kubenswrapper[4947]: I1129 07:09:24.200272 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh" event={"ID":"89ab0899-e8dd-40b6-9db7-6d4d567fc251","Type":"ContainerDied","Data":"d4b358deb8222da1b93452bb925bebcc37db6667de3a046485d8897bd3cd06f8"} Nov 29 07:09:24 crc kubenswrapper[4947]: I1129 07:09:24.771996 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-66hcs" Nov 29 07:09:24 crc kubenswrapper[4947]: I1129 07:09:24.772603 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-66hcs" Nov 29 07:09:24 crc kubenswrapper[4947]: I1129 07:09:24.829828 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-66hcs" Nov 29 07:09:25 crc kubenswrapper[4947]: I1129 07:09:25.072211 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-f6f5x"] Nov 29 07:09:25 crc kubenswrapper[4947]: I1129 07:09:25.084187 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-f6f5x"] Nov 29 07:09:25 crc kubenswrapper[4947]: I1129 07:09:25.217899 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88302c5-0ec1-4e34-848e-0f5deaaa3fea" path="/var/lib/kubelet/pods/f88302c5-0ec1-4e34-848e-0f5deaaa3fea/volumes" Nov 29 07:09:25 crc kubenswrapper[4947]: I1129 07:09:25.299038 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-66hcs" Nov 29 07:09:25 crc kubenswrapper[4947]: I1129 07:09:25.379821 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-66hcs"] Nov 29 07:09:25 crc kubenswrapper[4947]: I1129 07:09:25.739182 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh" Nov 29 07:09:25 crc kubenswrapper[4947]: I1129 07:09:25.893202 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2flz2\" (UniqueName: \"kubernetes.io/projected/89ab0899-e8dd-40b6-9db7-6d4d567fc251-kube-api-access-2flz2\") pod \"89ab0899-e8dd-40b6-9db7-6d4d567fc251\" (UID: \"89ab0899-e8dd-40b6-9db7-6d4d567fc251\") " Nov 29 07:09:25 crc kubenswrapper[4947]: I1129 07:09:25.893452 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89ab0899-e8dd-40b6-9db7-6d4d567fc251-inventory\") pod \"89ab0899-e8dd-40b6-9db7-6d4d567fc251\" (UID: \"89ab0899-e8dd-40b6-9db7-6d4d567fc251\") " Nov 29 07:09:25 crc kubenswrapper[4947]: I1129 07:09:25.893563 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89ab0899-e8dd-40b6-9db7-6d4d567fc251-ssh-key\") pod \"89ab0899-e8dd-40b6-9db7-6d4d567fc251\" (UID: \"89ab0899-e8dd-40b6-9db7-6d4d567fc251\") " Nov 29 07:09:25 crc kubenswrapper[4947]: I1129 07:09:25.903864 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ab0899-e8dd-40b6-9db7-6d4d567fc251-kube-api-access-2flz2" (OuterVolumeSpecName: "kube-api-access-2flz2") pod "89ab0899-e8dd-40b6-9db7-6d4d567fc251" (UID: "89ab0899-e8dd-40b6-9db7-6d4d567fc251"). InnerVolumeSpecName "kube-api-access-2flz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:09:25 crc kubenswrapper[4947]: I1129 07:09:25.929660 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ab0899-e8dd-40b6-9db7-6d4d567fc251-inventory" (OuterVolumeSpecName: "inventory") pod "89ab0899-e8dd-40b6-9db7-6d4d567fc251" (UID: "89ab0899-e8dd-40b6-9db7-6d4d567fc251"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:09:25 crc kubenswrapper[4947]: I1129 07:09:25.929745 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ab0899-e8dd-40b6-9db7-6d4d567fc251-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "89ab0899-e8dd-40b6-9db7-6d4d567fc251" (UID: "89ab0899-e8dd-40b6-9db7-6d4d567fc251"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:09:25 crc kubenswrapper[4947]: I1129 07:09:25.995748 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2flz2\" (UniqueName: \"kubernetes.io/projected/89ab0899-e8dd-40b6-9db7-6d4d567fc251-kube-api-access-2flz2\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:25 crc kubenswrapper[4947]: I1129 07:09:25.995795 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89ab0899-e8dd-40b6-9db7-6d4d567fc251-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:25 crc kubenswrapper[4947]: I1129 07:09:25.995805 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89ab0899-e8dd-40b6-9db7-6d4d567fc251-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:26 crc kubenswrapper[4947]: I1129 07:09:26.247933 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh" Nov 29 07:09:26 crc kubenswrapper[4947]: I1129 07:09:26.253033 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh" event={"ID":"89ab0899-e8dd-40b6-9db7-6d4d567fc251","Type":"ContainerDied","Data":"eda0833a8b2e9310b21019044c5b1c6f7a6dc5d58d454ca2ce48de1512c377ba"} Nov 29 07:09:26 crc kubenswrapper[4947]: I1129 07:09:26.253134 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eda0833a8b2e9310b21019044c5b1c6f7a6dc5d58d454ca2ce48de1512c377ba" Nov 29 07:09:26 crc kubenswrapper[4947]: I1129 07:09:26.322332 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nxl25"] Nov 29 07:09:26 crc kubenswrapper[4947]: E1129 07:09:26.322838 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ab0899-e8dd-40b6-9db7-6d4d567fc251" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:09:26 crc kubenswrapper[4947]: I1129 07:09:26.322866 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ab0899-e8dd-40b6-9db7-6d4d567fc251" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:09:26 crc kubenswrapper[4947]: I1129 07:09:26.323122 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="89ab0899-e8dd-40b6-9db7-6d4d567fc251" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:09:26 crc kubenswrapper[4947]: I1129 07:09:26.324151 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nxl25" Nov 29 07:09:26 crc kubenswrapper[4947]: I1129 07:09:26.326835 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:09:26 crc kubenswrapper[4947]: I1129 07:09:26.327292 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:09:26 crc kubenswrapper[4947]: I1129 07:09:26.327534 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:09:26 crc kubenswrapper[4947]: I1129 07:09:26.327679 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:09:26 crc kubenswrapper[4947]: I1129 07:09:26.333711 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nxl25"] Nov 29 07:09:26 crc kubenswrapper[4947]: I1129 07:09:26.404756 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smwkp\" (UniqueName: \"kubernetes.io/projected/da557e2b-8588-4c78-9c06-7f8e891bc89a-kube-api-access-smwkp\") pod \"ssh-known-hosts-edpm-deployment-nxl25\" (UID: \"da557e2b-8588-4c78-9c06-7f8e891bc89a\") " pod="openstack/ssh-known-hosts-edpm-deployment-nxl25" Nov 29 07:09:26 crc kubenswrapper[4947]: I1129 07:09:26.404827 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/da557e2b-8588-4c78-9c06-7f8e891bc89a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-nxl25\" (UID: \"da557e2b-8588-4c78-9c06-7f8e891bc89a\") " pod="openstack/ssh-known-hosts-edpm-deployment-nxl25" Nov 29 07:09:26 crc kubenswrapper[4947]: I1129 07:09:26.404889 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da557e2b-8588-4c78-9c06-7f8e891bc89a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-nxl25\" (UID: \"da557e2b-8588-4c78-9c06-7f8e891bc89a\") " pod="openstack/ssh-known-hosts-edpm-deployment-nxl25" Nov 29 07:09:26 crc kubenswrapper[4947]: I1129 07:09:26.507083 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smwkp\" (UniqueName: \"kubernetes.io/projected/da557e2b-8588-4c78-9c06-7f8e891bc89a-kube-api-access-smwkp\") pod \"ssh-known-hosts-edpm-deployment-nxl25\" (UID: \"da557e2b-8588-4c78-9c06-7f8e891bc89a\") " pod="openstack/ssh-known-hosts-edpm-deployment-nxl25" Nov 29 07:09:26 crc kubenswrapper[4947]: I1129 07:09:26.507325 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/da557e2b-8588-4c78-9c06-7f8e891bc89a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-nxl25\" (UID: \"da557e2b-8588-4c78-9c06-7f8e891bc89a\") " pod="openstack/ssh-known-hosts-edpm-deployment-nxl25" Nov 29 07:09:26 crc kubenswrapper[4947]: I1129 07:09:26.507401 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da557e2b-8588-4c78-9c06-7f8e891bc89a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-nxl25\" (UID: \"da557e2b-8588-4c78-9c06-7f8e891bc89a\") " pod="openstack/ssh-known-hosts-edpm-deployment-nxl25" Nov 29 07:09:26 crc kubenswrapper[4947]: I1129 07:09:26.514192 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da557e2b-8588-4c78-9c06-7f8e891bc89a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-nxl25\" (UID: \"da557e2b-8588-4c78-9c06-7f8e891bc89a\") " pod="openstack/ssh-known-hosts-edpm-deployment-nxl25" Nov 29 07:09:26 crc kubenswrapper[4947]: 
I1129 07:09:26.514288 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/da557e2b-8588-4c78-9c06-7f8e891bc89a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-nxl25\" (UID: \"da557e2b-8588-4c78-9c06-7f8e891bc89a\") " pod="openstack/ssh-known-hosts-edpm-deployment-nxl25" Nov 29 07:09:26 crc kubenswrapper[4947]: I1129 07:09:26.529930 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smwkp\" (UniqueName: \"kubernetes.io/projected/da557e2b-8588-4c78-9c06-7f8e891bc89a-kube-api-access-smwkp\") pod \"ssh-known-hosts-edpm-deployment-nxl25\" (UID: \"da557e2b-8588-4c78-9c06-7f8e891bc89a\") " pod="openstack/ssh-known-hosts-edpm-deployment-nxl25" Nov 29 07:09:26 crc kubenswrapper[4947]: I1129 07:09:26.650347 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nxl25" Nov 29 07:09:27 crc kubenswrapper[4947]: I1129 07:09:27.195538 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nxl25"] Nov 29 07:09:27 crc kubenswrapper[4947]: I1129 07:09:27.261848 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nxl25" event={"ID":"da557e2b-8588-4c78-9c06-7f8e891bc89a","Type":"ContainerStarted","Data":"ae0f0c82b41fc8400d2d60c335bbe170fb8ae70b93cb105d6dd9c4f333edf9a0"} Nov 29 07:09:27 crc kubenswrapper[4947]: I1129 07:09:27.262114 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-66hcs" podUID="ad6bbca2-e9ac-414c-8320-a58775322b7b" containerName="registry-server" containerID="cri-o://11684dea5f6ed366da449e5a6c04c6ad205df39f6e334e2c83c5f852cd483acd" gracePeriod=2 Nov 29 07:09:27 crc kubenswrapper[4947]: I1129 07:09:27.759094 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-66hcs" Nov 29 07:09:27 crc kubenswrapper[4947]: I1129 07:09:27.836892 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad6bbca2-e9ac-414c-8320-a58775322b7b-utilities\") pod \"ad6bbca2-e9ac-414c-8320-a58775322b7b\" (UID: \"ad6bbca2-e9ac-414c-8320-a58775322b7b\") " Nov 29 07:09:27 crc kubenswrapper[4947]: I1129 07:09:27.837430 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x42lq\" (UniqueName: \"kubernetes.io/projected/ad6bbca2-e9ac-414c-8320-a58775322b7b-kube-api-access-x42lq\") pod \"ad6bbca2-e9ac-414c-8320-a58775322b7b\" (UID: \"ad6bbca2-e9ac-414c-8320-a58775322b7b\") " Nov 29 07:09:27 crc kubenswrapper[4947]: I1129 07:09:27.837480 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad6bbca2-e9ac-414c-8320-a58775322b7b-catalog-content\") pod \"ad6bbca2-e9ac-414c-8320-a58775322b7b\" (UID: \"ad6bbca2-e9ac-414c-8320-a58775322b7b\") " Nov 29 07:09:27 crc kubenswrapper[4947]: I1129 07:09:27.839123 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad6bbca2-e9ac-414c-8320-a58775322b7b-utilities" (OuterVolumeSpecName: "utilities") pod "ad6bbca2-e9ac-414c-8320-a58775322b7b" (UID: "ad6bbca2-e9ac-414c-8320-a58775322b7b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:09:27 crc kubenswrapper[4947]: I1129 07:09:27.839998 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad6bbca2-e9ac-414c-8320-a58775322b7b-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:27 crc kubenswrapper[4947]: I1129 07:09:27.848663 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad6bbca2-e9ac-414c-8320-a58775322b7b-kube-api-access-x42lq" (OuterVolumeSpecName: "kube-api-access-x42lq") pod "ad6bbca2-e9ac-414c-8320-a58775322b7b" (UID: "ad6bbca2-e9ac-414c-8320-a58775322b7b"). InnerVolumeSpecName "kube-api-access-x42lq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:09:27 crc kubenswrapper[4947]: I1129 07:09:27.942497 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x42lq\" (UniqueName: \"kubernetes.io/projected/ad6bbca2-e9ac-414c-8320-a58775322b7b-kube-api-access-x42lq\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:27 crc kubenswrapper[4947]: I1129 07:09:27.978743 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad6bbca2-e9ac-414c-8320-a58775322b7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad6bbca2-e9ac-414c-8320-a58775322b7b" (UID: "ad6bbca2-e9ac-414c-8320-a58775322b7b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:09:28 crc kubenswrapper[4947]: I1129 07:09:28.045613 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad6bbca2-e9ac-414c-8320-a58775322b7b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:28 crc kubenswrapper[4947]: I1129 07:09:28.272970 4947 generic.go:334] "Generic (PLEG): container finished" podID="ad6bbca2-e9ac-414c-8320-a58775322b7b" containerID="11684dea5f6ed366da449e5a6c04c6ad205df39f6e334e2c83c5f852cd483acd" exitCode=0 Nov 29 07:09:28 crc kubenswrapper[4947]: I1129 07:09:28.273036 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66hcs" event={"ID":"ad6bbca2-e9ac-414c-8320-a58775322b7b","Type":"ContainerDied","Data":"11684dea5f6ed366da449e5a6c04c6ad205df39f6e334e2c83c5f852cd483acd"} Nov 29 07:09:28 crc kubenswrapper[4947]: I1129 07:09:28.273079 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-66hcs" Nov 29 07:09:28 crc kubenswrapper[4947]: I1129 07:09:28.273413 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66hcs" event={"ID":"ad6bbca2-e9ac-414c-8320-a58775322b7b","Type":"ContainerDied","Data":"df73b687b5bb6a041939e0180a7d6444c0e28eba649d0f83c9d66ada1b59bb00"} Nov 29 07:09:28 crc kubenswrapper[4947]: I1129 07:09:28.273440 4947 scope.go:117] "RemoveContainer" containerID="11684dea5f6ed366da449e5a6c04c6ad205df39f6e334e2c83c5f852cd483acd" Nov 29 07:09:28 crc kubenswrapper[4947]: I1129 07:09:28.308584 4947 scope.go:117] "RemoveContainer" containerID="07e5a4a59e9985d1fbecca5524c2aab32d434b451a4f6b230caa3923a74f5c80" Nov 29 07:09:28 crc kubenswrapper[4947]: I1129 07:09:28.321122 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-66hcs"] Nov 29 07:09:28 crc kubenswrapper[4947]: I1129 07:09:28.334946 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-66hcs"] Nov 29 07:09:28 crc kubenswrapper[4947]: I1129 07:09:28.352734 4947 scope.go:117] "RemoveContainer" containerID="7ca947aefd65e008170f4d694d5326a667000d2f511336038750a8e05c88d334" Nov 29 07:09:28 crc kubenswrapper[4947]: I1129 07:09:28.388860 4947 scope.go:117] "RemoveContainer" containerID="11684dea5f6ed366da449e5a6c04c6ad205df39f6e334e2c83c5f852cd483acd" Nov 29 07:09:28 crc kubenswrapper[4947]: E1129 07:09:28.390735 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11684dea5f6ed366da449e5a6c04c6ad205df39f6e334e2c83c5f852cd483acd\": container with ID starting with 11684dea5f6ed366da449e5a6c04c6ad205df39f6e334e2c83c5f852cd483acd not found: ID does not exist" containerID="11684dea5f6ed366da449e5a6c04c6ad205df39f6e334e2c83c5f852cd483acd" Nov 29 07:09:28 crc kubenswrapper[4947]: I1129 07:09:28.390813 4947 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11684dea5f6ed366da449e5a6c04c6ad205df39f6e334e2c83c5f852cd483acd"} err="failed to get container status \"11684dea5f6ed366da449e5a6c04c6ad205df39f6e334e2c83c5f852cd483acd\": rpc error: code = NotFound desc = could not find container \"11684dea5f6ed366da449e5a6c04c6ad205df39f6e334e2c83c5f852cd483acd\": container with ID starting with 11684dea5f6ed366da449e5a6c04c6ad205df39f6e334e2c83c5f852cd483acd not found: ID does not exist" Nov 29 07:09:28 crc kubenswrapper[4947]: I1129 07:09:28.390860 4947 scope.go:117] "RemoveContainer" containerID="07e5a4a59e9985d1fbecca5524c2aab32d434b451a4f6b230caa3923a74f5c80" Nov 29 07:09:28 crc kubenswrapper[4947]: E1129 07:09:28.391467 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07e5a4a59e9985d1fbecca5524c2aab32d434b451a4f6b230caa3923a74f5c80\": container with ID starting with 07e5a4a59e9985d1fbecca5524c2aab32d434b451a4f6b230caa3923a74f5c80 not found: ID does not exist" containerID="07e5a4a59e9985d1fbecca5524c2aab32d434b451a4f6b230caa3923a74f5c80" Nov 29 07:09:28 crc kubenswrapper[4947]: I1129 07:09:28.391498 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07e5a4a59e9985d1fbecca5524c2aab32d434b451a4f6b230caa3923a74f5c80"} err="failed to get container status \"07e5a4a59e9985d1fbecca5524c2aab32d434b451a4f6b230caa3923a74f5c80\": rpc error: code = NotFound desc = could not find container \"07e5a4a59e9985d1fbecca5524c2aab32d434b451a4f6b230caa3923a74f5c80\": container with ID starting with 07e5a4a59e9985d1fbecca5524c2aab32d434b451a4f6b230caa3923a74f5c80 not found: ID does not exist" Nov 29 07:09:28 crc kubenswrapper[4947]: I1129 07:09:28.391516 4947 scope.go:117] "RemoveContainer" containerID="7ca947aefd65e008170f4d694d5326a667000d2f511336038750a8e05c88d334" Nov 29 07:09:28 crc kubenswrapper[4947]: E1129 
07:09:28.392367 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ca947aefd65e008170f4d694d5326a667000d2f511336038750a8e05c88d334\": container with ID starting with 7ca947aefd65e008170f4d694d5326a667000d2f511336038750a8e05c88d334 not found: ID does not exist" containerID="7ca947aefd65e008170f4d694d5326a667000d2f511336038750a8e05c88d334" Nov 29 07:09:28 crc kubenswrapper[4947]: I1129 07:09:28.392417 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ca947aefd65e008170f4d694d5326a667000d2f511336038750a8e05c88d334"} err="failed to get container status \"7ca947aefd65e008170f4d694d5326a667000d2f511336038750a8e05c88d334\": rpc error: code = NotFound desc = could not find container \"7ca947aefd65e008170f4d694d5326a667000d2f511336038750a8e05c88d334\": container with ID starting with 7ca947aefd65e008170f4d694d5326a667000d2f511336038750a8e05c88d334 not found: ID does not exist" Nov 29 07:09:29 crc kubenswrapper[4947]: I1129 07:09:29.192894 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad6bbca2-e9ac-414c-8320-a58775322b7b" path="/var/lib/kubelet/pods/ad6bbca2-e9ac-414c-8320-a58775322b7b/volumes" Nov 29 07:09:29 crc kubenswrapper[4947]: I1129 07:09:29.285801 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nxl25" event={"ID":"da557e2b-8588-4c78-9c06-7f8e891bc89a","Type":"ContainerStarted","Data":"db2b60b2e6cb04ddfb2155f615b46e8541ba7847b6edc8c55ddfdaa67d752d5e"} Nov 29 07:09:37 crc kubenswrapper[4947]: I1129 07:09:37.375898 4947 generic.go:334] "Generic (PLEG): container finished" podID="da557e2b-8588-4c78-9c06-7f8e891bc89a" containerID="db2b60b2e6cb04ddfb2155f615b46e8541ba7847b6edc8c55ddfdaa67d752d5e" exitCode=0 Nov 29 07:09:37 crc kubenswrapper[4947]: I1129 07:09:37.376028 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ssh-known-hosts-edpm-deployment-nxl25" event={"ID":"da557e2b-8588-4c78-9c06-7f8e891bc89a","Type":"ContainerDied","Data":"db2b60b2e6cb04ddfb2155f615b46e8541ba7847b6edc8c55ddfdaa67d752d5e"} Nov 29 07:09:37 crc kubenswrapper[4947]: I1129 07:09:37.951181 4947 scope.go:117] "RemoveContainer" containerID="26ebe7134e34e48a78b93ca743611ac2b6b566a39deb26b3d9388c0dc4cd659d" Nov 29 07:09:38 crc kubenswrapper[4947]: I1129 07:09:38.033665 4947 scope.go:117] "RemoveContainer" containerID="9ad70e03fefbcb9419496af9cc1b1c36fb6799180c7e812b53b5325346d0143a" Nov 29 07:09:38 crc kubenswrapper[4947]: I1129 07:09:38.940081 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nxl25" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.122638 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/da557e2b-8588-4c78-9c06-7f8e891bc89a-inventory-0\") pod \"da557e2b-8588-4c78-9c06-7f8e891bc89a\" (UID: \"da557e2b-8588-4c78-9c06-7f8e891bc89a\") " Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.123155 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da557e2b-8588-4c78-9c06-7f8e891bc89a-ssh-key-openstack-edpm-ipam\") pod \"da557e2b-8588-4c78-9c06-7f8e891bc89a\" (UID: \"da557e2b-8588-4c78-9c06-7f8e891bc89a\") " Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.123258 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smwkp\" (UniqueName: \"kubernetes.io/projected/da557e2b-8588-4c78-9c06-7f8e891bc89a-kube-api-access-smwkp\") pod \"da557e2b-8588-4c78-9c06-7f8e891bc89a\" (UID: \"da557e2b-8588-4c78-9c06-7f8e891bc89a\") " Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.134931 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/da557e2b-8588-4c78-9c06-7f8e891bc89a-kube-api-access-smwkp" (OuterVolumeSpecName: "kube-api-access-smwkp") pod "da557e2b-8588-4c78-9c06-7f8e891bc89a" (UID: "da557e2b-8588-4c78-9c06-7f8e891bc89a"). InnerVolumeSpecName "kube-api-access-smwkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.160996 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da557e2b-8588-4c78-9c06-7f8e891bc89a-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "da557e2b-8588-4c78-9c06-7f8e891bc89a" (UID: "da557e2b-8588-4c78-9c06-7f8e891bc89a"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.165132 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da557e2b-8588-4c78-9c06-7f8e891bc89a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "da557e2b-8588-4c78-9c06-7f8e891bc89a" (UID: "da557e2b-8588-4c78-9c06-7f8e891bc89a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.226989 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da557e2b-8588-4c78-9c06-7f8e891bc89a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.227671 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smwkp\" (UniqueName: \"kubernetes.io/projected/da557e2b-8588-4c78-9c06-7f8e891bc89a-kube-api-access-smwkp\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.227692 4947 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/da557e2b-8588-4c78-9c06-7f8e891bc89a-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.399766 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nxl25" event={"ID":"da557e2b-8588-4c78-9c06-7f8e891bc89a","Type":"ContainerDied","Data":"ae0f0c82b41fc8400d2d60c335bbe170fb8ae70b93cb105d6dd9c4f333edf9a0"} Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.399830 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae0f0c82b41fc8400d2d60c335bbe170fb8ae70b93cb105d6dd9c4f333edf9a0" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.399916 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nxl25" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.501085 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj"] Nov 29 07:09:39 crc kubenswrapper[4947]: E1129 07:09:39.501724 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad6bbca2-e9ac-414c-8320-a58775322b7b" containerName="extract-utilities" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.501751 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad6bbca2-e9ac-414c-8320-a58775322b7b" containerName="extract-utilities" Nov 29 07:09:39 crc kubenswrapper[4947]: E1129 07:09:39.501776 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad6bbca2-e9ac-414c-8320-a58775322b7b" containerName="extract-content" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.501785 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad6bbca2-e9ac-414c-8320-a58775322b7b" containerName="extract-content" Nov 29 07:09:39 crc kubenswrapper[4947]: E1129 07:09:39.501800 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad6bbca2-e9ac-414c-8320-a58775322b7b" containerName="registry-server" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.501810 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad6bbca2-e9ac-414c-8320-a58775322b7b" containerName="registry-server" Nov 29 07:09:39 crc kubenswrapper[4947]: E1129 07:09:39.501819 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da557e2b-8588-4c78-9c06-7f8e891bc89a" containerName="ssh-known-hosts-edpm-deployment" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.501826 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="da557e2b-8588-4c78-9c06-7f8e891bc89a" containerName="ssh-known-hosts-edpm-deployment" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.502098 4947 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="da557e2b-8588-4c78-9c06-7f8e891bc89a" containerName="ssh-known-hosts-edpm-deployment" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.502139 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad6bbca2-e9ac-414c-8320-a58775322b7b" containerName="registry-server" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.503196 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.508385 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.508622 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.509151 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.510659 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.517091 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj"] Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.642684 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f3a52042-9a2c-4179-add5-f4b0da2f9eb4-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6nrnj\" (UID: \"f3a52042-9a2c-4179-add5-f4b0da2f9eb4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.642779 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3a52042-9a2c-4179-add5-f4b0da2f9eb4-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6nrnj\" (UID: \"f3a52042-9a2c-4179-add5-f4b0da2f9eb4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.642876 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp52j\" (UniqueName: \"kubernetes.io/projected/f3a52042-9a2c-4179-add5-f4b0da2f9eb4-kube-api-access-fp52j\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6nrnj\" (UID: \"f3a52042-9a2c-4179-add5-f4b0da2f9eb4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.745868 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp52j\" (UniqueName: \"kubernetes.io/projected/f3a52042-9a2c-4179-add5-f4b0da2f9eb4-kube-api-access-fp52j\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6nrnj\" (UID: \"f3a52042-9a2c-4179-add5-f4b0da2f9eb4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.746619 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f3a52042-9a2c-4179-add5-f4b0da2f9eb4-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6nrnj\" (UID: \"f3a52042-9a2c-4179-add5-f4b0da2f9eb4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.747758 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3a52042-9a2c-4179-add5-f4b0da2f9eb4-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6nrnj\" (UID: \"f3a52042-9a2c-4179-add5-f4b0da2f9eb4\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.763958 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f3a52042-9a2c-4179-add5-f4b0da2f9eb4-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6nrnj\" (UID: \"f3a52042-9a2c-4179-add5-f4b0da2f9eb4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.764053 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3a52042-9a2c-4179-add5-f4b0da2f9eb4-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6nrnj\" (UID: \"f3a52042-9a2c-4179-add5-f4b0da2f9eb4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.768630 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp52j\" (UniqueName: \"kubernetes.io/projected/f3a52042-9a2c-4179-add5-f4b0da2f9eb4-kube-api-access-fp52j\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6nrnj\" (UID: \"f3a52042-9a2c-4179-add5-f4b0da2f9eb4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj" Nov 29 07:09:39 crc kubenswrapper[4947]: I1129 07:09:39.834824 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj" Nov 29 07:09:40 crc kubenswrapper[4947]: I1129 07:09:40.479956 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj"] Nov 29 07:09:40 crc kubenswrapper[4947]: W1129 07:09:40.489598 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3a52042_9a2c_4179_add5_f4b0da2f9eb4.slice/crio-c8048fd212d87d835317466bbc8f317327f5e8530430374531e7b96cae66081e WatchSource:0}: Error finding container c8048fd212d87d835317466bbc8f317327f5e8530430374531e7b96cae66081e: Status 404 returned error can't find the container with id c8048fd212d87d835317466bbc8f317327f5e8530430374531e7b96cae66081e Nov 29 07:09:41 crc kubenswrapper[4947]: I1129 07:09:41.459855 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj" event={"ID":"f3a52042-9a2c-4179-add5-f4b0da2f9eb4","Type":"ContainerStarted","Data":"c8048fd212d87d835317466bbc8f317327f5e8530430374531e7b96cae66081e"} Nov 29 07:09:42 crc kubenswrapper[4947]: I1129 07:09:42.471733 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj" event={"ID":"f3a52042-9a2c-4179-add5-f4b0da2f9eb4","Type":"ContainerStarted","Data":"90d5dbdad52563af7f8908c5a87d7cdb8ec1d3f0d82d106f7b2587b60cd73ab2"} Nov 29 07:09:42 crc kubenswrapper[4947]: I1129 07:09:42.491233 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj" podStartSLOduration=2.7177846150000002 podStartE2EDuration="3.491194184s" podCreationTimestamp="2025-11-29 07:09:39 +0000 UTC" firstStartedPulling="2025-11-29 07:09:40.493529721 +0000 UTC m=+2131.537911812" lastFinishedPulling="2025-11-29 07:09:41.2669393 +0000 UTC m=+2132.311321381" 
observedRunningTime="2025-11-29 07:09:42.488766603 +0000 UTC m=+2133.533148684" watchObservedRunningTime="2025-11-29 07:09:42.491194184 +0000 UTC m=+2133.535576265" Nov 29 07:09:50 crc kubenswrapper[4947]: I1129 07:09:50.567519 4947 generic.go:334] "Generic (PLEG): container finished" podID="f3a52042-9a2c-4179-add5-f4b0da2f9eb4" containerID="90d5dbdad52563af7f8908c5a87d7cdb8ec1d3f0d82d106f7b2587b60cd73ab2" exitCode=0 Nov 29 07:09:50 crc kubenswrapper[4947]: I1129 07:09:50.567625 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj" event={"ID":"f3a52042-9a2c-4179-add5-f4b0da2f9eb4","Type":"ContainerDied","Data":"90d5dbdad52563af7f8908c5a87d7cdb8ec1d3f0d82d106f7b2587b60cd73ab2"} Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.105827 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.235995 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp52j\" (UniqueName: \"kubernetes.io/projected/f3a52042-9a2c-4179-add5-f4b0da2f9eb4-kube-api-access-fp52j\") pod \"f3a52042-9a2c-4179-add5-f4b0da2f9eb4\" (UID: \"f3a52042-9a2c-4179-add5-f4b0da2f9eb4\") " Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.236443 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3a52042-9a2c-4179-add5-f4b0da2f9eb4-inventory\") pod \"f3a52042-9a2c-4179-add5-f4b0da2f9eb4\" (UID: \"f3a52042-9a2c-4179-add5-f4b0da2f9eb4\") " Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.236543 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f3a52042-9a2c-4179-add5-f4b0da2f9eb4-ssh-key\") pod \"f3a52042-9a2c-4179-add5-f4b0da2f9eb4\" (UID: 
\"f3a52042-9a2c-4179-add5-f4b0da2f9eb4\") " Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.245798 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a52042-9a2c-4179-add5-f4b0da2f9eb4-kube-api-access-fp52j" (OuterVolumeSpecName: "kube-api-access-fp52j") pod "f3a52042-9a2c-4179-add5-f4b0da2f9eb4" (UID: "f3a52042-9a2c-4179-add5-f4b0da2f9eb4"). InnerVolumeSpecName "kube-api-access-fp52j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.271820 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a52042-9a2c-4179-add5-f4b0da2f9eb4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f3a52042-9a2c-4179-add5-f4b0da2f9eb4" (UID: "f3a52042-9a2c-4179-add5-f4b0da2f9eb4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.282193 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a52042-9a2c-4179-add5-f4b0da2f9eb4-inventory" (OuterVolumeSpecName: "inventory") pod "f3a52042-9a2c-4179-add5-f4b0da2f9eb4" (UID: "f3a52042-9a2c-4179-add5-f4b0da2f9eb4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.339415 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp52j\" (UniqueName: \"kubernetes.io/projected/f3a52042-9a2c-4179-add5-f4b0da2f9eb4-kube-api-access-fp52j\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.339672 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3a52042-9a2c-4179-add5-f4b0da2f9eb4-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.339726 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f3a52042-9a2c-4179-add5-f4b0da2f9eb4-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.597125 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj" event={"ID":"f3a52042-9a2c-4179-add5-f4b0da2f9eb4","Type":"ContainerDied","Data":"c8048fd212d87d835317466bbc8f317327f5e8530430374531e7b96cae66081e"} Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.597475 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8048fd212d87d835317466bbc8f317327f5e8530430374531e7b96cae66081e" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.597243 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.706649 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z"] Nov 29 07:09:52 crc kubenswrapper[4947]: E1129 07:09:52.707728 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a52042-9a2c-4179-add5-f4b0da2f9eb4" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.707760 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a52042-9a2c-4179-add5-f4b0da2f9eb4" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.708172 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a52042-9a2c-4179-add5-f4b0da2f9eb4" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.709156 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.714243 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.718489 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.718661 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.720170 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.730791 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z"] Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.856549 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04257891-6366-45f6-95e9-3d2d479d7380-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z\" (UID: \"04257891-6366-45f6-95e9-3d2d479d7380\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.856658 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04257891-6366-45f6-95e9-3d2d479d7380-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z\" (UID: \"04257891-6366-45f6-95e9-3d2d479d7380\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.857376 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgl6v\" (UniqueName: \"kubernetes.io/projected/04257891-6366-45f6-95e9-3d2d479d7380-kube-api-access-fgl6v\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z\" (UID: \"04257891-6366-45f6-95e9-3d2d479d7380\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.959850 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgl6v\" (UniqueName: \"kubernetes.io/projected/04257891-6366-45f6-95e9-3d2d479d7380-kube-api-access-fgl6v\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z\" (UID: \"04257891-6366-45f6-95e9-3d2d479d7380\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.960048 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04257891-6366-45f6-95e9-3d2d479d7380-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z\" (UID: \"04257891-6366-45f6-95e9-3d2d479d7380\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.960093 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04257891-6366-45f6-95e9-3d2d479d7380-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z\" (UID: \"04257891-6366-45f6-95e9-3d2d479d7380\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.968115 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04257891-6366-45f6-95e9-3d2d479d7380-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z\" (UID: 
\"04257891-6366-45f6-95e9-3d2d479d7380\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.968242 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04257891-6366-45f6-95e9-3d2d479d7380-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z\" (UID: \"04257891-6366-45f6-95e9-3d2d479d7380\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.982005 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgl6v\" (UniqueName: \"kubernetes.io/projected/04257891-6366-45f6-95e9-3d2d479d7380-kube-api-access-fgl6v\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z\" (UID: \"04257891-6366-45f6-95e9-3d2d479d7380\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z" Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.987475 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:09:52 crc kubenswrapper[4947]: I1129 07:09:52.987571 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:09:53 crc kubenswrapper[4947]: I1129 07:09:53.036032 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z" Nov 29 07:09:53 crc kubenswrapper[4947]: I1129 07:09:53.636325 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z"] Nov 29 07:09:54 crc kubenswrapper[4947]: I1129 07:09:54.621037 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z" event={"ID":"04257891-6366-45f6-95e9-3d2d479d7380","Type":"ContainerStarted","Data":"bcaf855d0580ba6adffec268c4d6831607e2704c3e4ef2d800060a00e65e647f"} Nov 29 07:09:56 crc kubenswrapper[4947]: I1129 07:09:56.649924 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z" event={"ID":"04257891-6366-45f6-95e9-3d2d479d7380","Type":"ContainerStarted","Data":"0f851512bfc62d98d76dadb5271814ebb6ba8b67b31c069e285d0813b3ef3b0b"} Nov 29 07:09:57 crc kubenswrapper[4947]: I1129 07:09:57.681162 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z" podStartSLOduration=3.322985563 podStartE2EDuration="5.681137726s" podCreationTimestamp="2025-11-29 07:09:52 +0000 UTC" firstStartedPulling="2025-11-29 07:09:53.64267167 +0000 UTC m=+2144.687053751" lastFinishedPulling="2025-11-29 07:09:56.000823833 +0000 UTC m=+2147.045205914" observedRunningTime="2025-11-29 07:09:57.676329184 +0000 UTC m=+2148.720711275" watchObservedRunningTime="2025-11-29 07:09:57.681137726 +0000 UTC m=+2148.725519807" Nov 29 07:10:07 crc kubenswrapper[4947]: I1129 07:10:07.766604 4947 generic.go:334] "Generic (PLEG): container finished" podID="04257891-6366-45f6-95e9-3d2d479d7380" containerID="0f851512bfc62d98d76dadb5271814ebb6ba8b67b31c069e285d0813b3ef3b0b" exitCode=0 Nov 29 07:10:07 crc kubenswrapper[4947]: I1129 07:10:07.766758 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z" event={"ID":"04257891-6366-45f6-95e9-3d2d479d7380","Type":"ContainerDied","Data":"0f851512bfc62d98d76dadb5271814ebb6ba8b67b31c069e285d0813b3ef3b0b"} Nov 29 07:10:09 crc kubenswrapper[4947]: I1129 07:10:09.223570 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z" Nov 29 07:10:09 crc kubenswrapper[4947]: I1129 07:10:09.377859 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgl6v\" (UniqueName: \"kubernetes.io/projected/04257891-6366-45f6-95e9-3d2d479d7380-kube-api-access-fgl6v\") pod \"04257891-6366-45f6-95e9-3d2d479d7380\" (UID: \"04257891-6366-45f6-95e9-3d2d479d7380\") " Nov 29 07:10:09 crc kubenswrapper[4947]: I1129 07:10:09.379086 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04257891-6366-45f6-95e9-3d2d479d7380-ssh-key\") pod \"04257891-6366-45f6-95e9-3d2d479d7380\" (UID: \"04257891-6366-45f6-95e9-3d2d479d7380\") " Nov 29 07:10:09 crc kubenswrapper[4947]: I1129 07:10:09.379130 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04257891-6366-45f6-95e9-3d2d479d7380-inventory\") pod \"04257891-6366-45f6-95e9-3d2d479d7380\" (UID: \"04257891-6366-45f6-95e9-3d2d479d7380\") " Nov 29 07:10:09 crc kubenswrapper[4947]: I1129 07:10:09.385394 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04257891-6366-45f6-95e9-3d2d479d7380-kube-api-access-fgl6v" (OuterVolumeSpecName: "kube-api-access-fgl6v") pod "04257891-6366-45f6-95e9-3d2d479d7380" (UID: "04257891-6366-45f6-95e9-3d2d479d7380"). InnerVolumeSpecName "kube-api-access-fgl6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:10:09 crc kubenswrapper[4947]: I1129 07:10:09.409827 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04257891-6366-45f6-95e9-3d2d479d7380-inventory" (OuterVolumeSpecName: "inventory") pod "04257891-6366-45f6-95e9-3d2d479d7380" (UID: "04257891-6366-45f6-95e9-3d2d479d7380"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:10:09 crc kubenswrapper[4947]: I1129 07:10:09.413279 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04257891-6366-45f6-95e9-3d2d479d7380-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "04257891-6366-45f6-95e9-3d2d479d7380" (UID: "04257891-6366-45f6-95e9-3d2d479d7380"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:10:09 crc kubenswrapper[4947]: I1129 07:10:09.482194 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgl6v\" (UniqueName: \"kubernetes.io/projected/04257891-6366-45f6-95e9-3d2d479d7380-kube-api-access-fgl6v\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:09 crc kubenswrapper[4947]: I1129 07:10:09.482376 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04257891-6366-45f6-95e9-3d2d479d7380-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:09 crc kubenswrapper[4947]: I1129 07:10:09.482394 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04257891-6366-45f6-95e9-3d2d479d7380-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:10:09 crc kubenswrapper[4947]: I1129 07:10:09.793982 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z" 
event={"ID":"04257891-6366-45f6-95e9-3d2d479d7380","Type":"ContainerDied","Data":"bcaf855d0580ba6adffec268c4d6831607e2704c3e4ef2d800060a00e65e647f"} Nov 29 07:10:09 crc kubenswrapper[4947]: I1129 07:10:09.794041 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcaf855d0580ba6adffec268c4d6831607e2704c3e4ef2d800060a00e65e647f" Nov 29 07:10:09 crc kubenswrapper[4947]: I1129 07:10:09.794120 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z" Nov 29 07:10:22 crc kubenswrapper[4947]: I1129 07:10:22.987963 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:10:22 crc kubenswrapper[4947]: I1129 07:10:22.989025 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:10:52 crc kubenswrapper[4947]: I1129 07:10:52.988042 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:10:52 crc kubenswrapper[4947]: I1129 07:10:52.988802 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:10:52 crc kubenswrapper[4947]: I1129 07:10:52.988872 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 07:10:52 crc kubenswrapper[4947]: I1129 07:10:52.989824 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3f350df373165ad762649db5ed6ef7635f995dc53553f1a53de7c07149d00d23"} pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:10:52 crc kubenswrapper[4947]: I1129 07:10:52.989888 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" containerID="cri-o://3f350df373165ad762649db5ed6ef7635f995dc53553f1a53de7c07149d00d23" gracePeriod=600 Nov 29 07:10:54 crc kubenswrapper[4947]: E1129 07:10:54.652188 4947 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f4d791f_bb61_4aaa_a09c_3007b59645a7.slice/crio-conmon-3f350df373165ad762649db5ed6ef7635f995dc53553f1a53de7c07149d00d23.scope\": RecentStats: unable to find data in memory cache]" Nov 29 07:10:55 crc kubenswrapper[4947]: I1129 07:10:55.433569 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerDied","Data":"3f350df373165ad762649db5ed6ef7635f995dc53553f1a53de7c07149d00d23"} Nov 29 07:10:55 crc kubenswrapper[4947]: I1129 07:10:55.434188 4947 scope.go:117] "RemoveContainer" 
containerID="4e08579e8ab72d8a7c4f3261905a80a5e108e5f74d8ab7f6a91c9b8476999fd3" Nov 29 07:10:55 crc kubenswrapper[4947]: I1129 07:10:55.433499 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerID="3f350df373165ad762649db5ed6ef7635f995dc53553f1a53de7c07149d00d23" exitCode=0 Nov 29 07:10:57 crc kubenswrapper[4947]: I1129 07:10:57.461261 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6"} Nov 29 07:11:57 crc kubenswrapper[4947]: I1129 07:11:57.988421 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7wkp9"] Nov 29 07:11:57 crc kubenswrapper[4947]: E1129 07:11:57.990803 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04257891-6366-45f6-95e9-3d2d479d7380" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:11:57 crc kubenswrapper[4947]: I1129 07:11:57.990909 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="04257891-6366-45f6-95e9-3d2d479d7380" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:11:57 crc kubenswrapper[4947]: I1129 07:11:57.991244 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="04257891-6366-45f6-95e9-3d2d479d7380" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:11:57 crc kubenswrapper[4947]: I1129 07:11:57.993235 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7wkp9" Nov 29 07:11:58 crc kubenswrapper[4947]: I1129 07:11:58.036343 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wkp9"] Nov 29 07:11:58 crc kubenswrapper[4947]: I1129 07:11:58.091886 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2435dbb1-46ea-4b97-a974-f00fbfa54c0d-utilities\") pod \"community-operators-7wkp9\" (UID: \"2435dbb1-46ea-4b97-a974-f00fbfa54c0d\") " pod="openshift-marketplace/community-operators-7wkp9" Nov 29 07:11:58 crc kubenswrapper[4947]: I1129 07:11:58.092429 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kp5j\" (UniqueName: \"kubernetes.io/projected/2435dbb1-46ea-4b97-a974-f00fbfa54c0d-kube-api-access-2kp5j\") pod \"community-operators-7wkp9\" (UID: \"2435dbb1-46ea-4b97-a974-f00fbfa54c0d\") " pod="openshift-marketplace/community-operators-7wkp9" Nov 29 07:11:58 crc kubenswrapper[4947]: I1129 07:11:58.093363 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2435dbb1-46ea-4b97-a974-f00fbfa54c0d-catalog-content\") pod \"community-operators-7wkp9\" (UID: \"2435dbb1-46ea-4b97-a974-f00fbfa54c0d\") " pod="openshift-marketplace/community-operators-7wkp9" Nov 29 07:11:58 crc kubenswrapper[4947]: I1129 07:11:58.195866 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2435dbb1-46ea-4b97-a974-f00fbfa54c0d-utilities\") pod \"community-operators-7wkp9\" (UID: \"2435dbb1-46ea-4b97-a974-f00fbfa54c0d\") " pod="openshift-marketplace/community-operators-7wkp9" Nov 29 07:11:58 crc kubenswrapper[4947]: I1129 07:11:58.196341 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2kp5j\" (UniqueName: \"kubernetes.io/projected/2435dbb1-46ea-4b97-a974-f00fbfa54c0d-kube-api-access-2kp5j\") pod \"community-operators-7wkp9\" (UID: \"2435dbb1-46ea-4b97-a974-f00fbfa54c0d\") " pod="openshift-marketplace/community-operators-7wkp9" Nov 29 07:11:58 crc kubenswrapper[4947]: I1129 07:11:58.196569 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2435dbb1-46ea-4b97-a974-f00fbfa54c0d-catalog-content\") pod \"community-operators-7wkp9\" (UID: \"2435dbb1-46ea-4b97-a974-f00fbfa54c0d\") " pod="openshift-marketplace/community-operators-7wkp9" Nov 29 07:11:58 crc kubenswrapper[4947]: I1129 07:11:58.196630 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2435dbb1-46ea-4b97-a974-f00fbfa54c0d-utilities\") pod \"community-operators-7wkp9\" (UID: \"2435dbb1-46ea-4b97-a974-f00fbfa54c0d\") " pod="openshift-marketplace/community-operators-7wkp9" Nov 29 07:11:58 crc kubenswrapper[4947]: I1129 07:11:58.197470 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2435dbb1-46ea-4b97-a974-f00fbfa54c0d-catalog-content\") pod \"community-operators-7wkp9\" (UID: \"2435dbb1-46ea-4b97-a974-f00fbfa54c0d\") " pod="openshift-marketplace/community-operators-7wkp9" Nov 29 07:11:58 crc kubenswrapper[4947]: I1129 07:11:58.218531 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kp5j\" (UniqueName: \"kubernetes.io/projected/2435dbb1-46ea-4b97-a974-f00fbfa54c0d-kube-api-access-2kp5j\") pod \"community-operators-7wkp9\" (UID: \"2435dbb1-46ea-4b97-a974-f00fbfa54c0d\") " pod="openshift-marketplace/community-operators-7wkp9" Nov 29 07:11:58 crc kubenswrapper[4947]: I1129 07:11:58.324004 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7wkp9" Nov 29 07:11:58 crc kubenswrapper[4947]: I1129 07:11:58.914026 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wkp9"] Nov 29 07:11:59 crc kubenswrapper[4947]: I1129 07:11:59.076406 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wkp9" event={"ID":"2435dbb1-46ea-4b97-a974-f00fbfa54c0d","Type":"ContainerStarted","Data":"698331b1cb36e149e9b605cd3f19dbd893a5c9ff661951b474b605341c4168c3"} Nov 29 07:12:00 crc kubenswrapper[4947]: I1129 07:12:00.090828 4947 generic.go:334] "Generic (PLEG): container finished" podID="2435dbb1-46ea-4b97-a974-f00fbfa54c0d" containerID="2fb9d7cd77d1949c827971d24b0102419468b5d8b14428e35bbc59a829f4a148" exitCode=0 Nov 29 07:12:00 crc kubenswrapper[4947]: I1129 07:12:00.090983 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wkp9" event={"ID":"2435dbb1-46ea-4b97-a974-f00fbfa54c0d","Type":"ContainerDied","Data":"2fb9d7cd77d1949c827971d24b0102419468b5d8b14428e35bbc59a829f4a148"} Nov 29 07:12:06 crc kubenswrapper[4947]: I1129 07:12:06.157163 4947 generic.go:334] "Generic (PLEG): container finished" podID="2435dbb1-46ea-4b97-a974-f00fbfa54c0d" containerID="6812d8d1f9afed58a0aab72ef5e7e52d8373e7b1d1cc32a30b6023acb22592dc" exitCode=0 Nov 29 07:12:06 crc kubenswrapper[4947]: I1129 07:12:06.157261 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wkp9" event={"ID":"2435dbb1-46ea-4b97-a974-f00fbfa54c0d","Type":"ContainerDied","Data":"6812d8d1f9afed58a0aab72ef5e7e52d8373e7b1d1cc32a30b6023acb22592dc"} Nov 29 07:12:09 crc kubenswrapper[4947]: I1129 07:12:09.192200 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wkp9" 
event={"ID":"2435dbb1-46ea-4b97-a974-f00fbfa54c0d","Type":"ContainerStarted","Data":"39823f772845f8afcae281985d1c6a0e211a6284f1f52f7aef2b71004dedd123"} Nov 29 07:12:09 crc kubenswrapper[4947]: I1129 07:12:09.222567 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7wkp9" podStartSLOduration=4.39262265 podStartE2EDuration="12.222543518s" podCreationTimestamp="2025-11-29 07:11:57 +0000 UTC" firstStartedPulling="2025-11-29 07:12:00.094137491 +0000 UTC m=+2271.138519572" lastFinishedPulling="2025-11-29 07:12:07.924058359 +0000 UTC m=+2278.968440440" observedRunningTime="2025-11-29 07:12:09.216760442 +0000 UTC m=+2280.261142533" watchObservedRunningTime="2025-11-29 07:12:09.222543518 +0000 UTC m=+2280.266925609" Nov 29 07:12:18 crc kubenswrapper[4947]: I1129 07:12:18.324328 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7wkp9" Nov 29 07:12:18 crc kubenswrapper[4947]: I1129 07:12:18.325464 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7wkp9" Nov 29 07:12:18 crc kubenswrapper[4947]: I1129 07:12:18.375941 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7wkp9" Nov 29 07:12:19 crc kubenswrapper[4947]: I1129 07:12:19.352314 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7wkp9" Nov 29 07:12:19 crc kubenswrapper[4947]: I1129 07:12:19.459368 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wkp9"] Nov 29 07:12:19 crc kubenswrapper[4947]: I1129 07:12:19.517780 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bqjz7"] Nov 29 07:12:19 crc kubenswrapper[4947]: I1129 07:12:19.518112 4947 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/community-operators-bqjz7" podUID="3d9d3e66-fc0e-4abd-8992-3e364ae72745" containerName="registry-server" containerID="cri-o://0fc75f6ab359ef25e3d45a02261ddba31ef3e1d0a16d38b3dbb4837bda69d7b2" gracePeriod=2 Nov 29 07:12:20 crc kubenswrapper[4947]: I1129 07:12:20.319807 4947 generic.go:334] "Generic (PLEG): container finished" podID="3d9d3e66-fc0e-4abd-8992-3e364ae72745" containerID="0fc75f6ab359ef25e3d45a02261ddba31ef3e1d0a16d38b3dbb4837bda69d7b2" exitCode=0 Nov 29 07:12:20 crc kubenswrapper[4947]: I1129 07:12:20.319942 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqjz7" event={"ID":"3d9d3e66-fc0e-4abd-8992-3e364ae72745","Type":"ContainerDied","Data":"0fc75f6ab359ef25e3d45a02261ddba31ef3e1d0a16d38b3dbb4837bda69d7b2"} Nov 29 07:12:20 crc kubenswrapper[4947]: I1129 07:12:20.774109 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bqjz7" Nov 29 07:12:20 crc kubenswrapper[4947]: I1129 07:12:20.888387 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2frm6\" (UniqueName: \"kubernetes.io/projected/3d9d3e66-fc0e-4abd-8992-3e364ae72745-kube-api-access-2frm6\") pod \"3d9d3e66-fc0e-4abd-8992-3e364ae72745\" (UID: \"3d9d3e66-fc0e-4abd-8992-3e364ae72745\") " Nov 29 07:12:20 crc kubenswrapper[4947]: I1129 07:12:20.888709 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9d3e66-fc0e-4abd-8992-3e364ae72745-utilities\") pod \"3d9d3e66-fc0e-4abd-8992-3e364ae72745\" (UID: \"3d9d3e66-fc0e-4abd-8992-3e364ae72745\") " Nov 29 07:12:20 crc kubenswrapper[4947]: I1129 07:12:20.888784 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3d9d3e66-fc0e-4abd-8992-3e364ae72745-catalog-content\") pod \"3d9d3e66-fc0e-4abd-8992-3e364ae72745\" (UID: \"3d9d3e66-fc0e-4abd-8992-3e364ae72745\") " Nov 29 07:12:20 crc kubenswrapper[4947]: I1129 07:12:20.889445 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d9d3e66-fc0e-4abd-8992-3e364ae72745-utilities" (OuterVolumeSpecName: "utilities") pod "3d9d3e66-fc0e-4abd-8992-3e364ae72745" (UID: "3d9d3e66-fc0e-4abd-8992-3e364ae72745"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:12:20 crc kubenswrapper[4947]: I1129 07:12:20.897425 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9d3e66-fc0e-4abd-8992-3e364ae72745-kube-api-access-2frm6" (OuterVolumeSpecName: "kube-api-access-2frm6") pod "3d9d3e66-fc0e-4abd-8992-3e364ae72745" (UID: "3d9d3e66-fc0e-4abd-8992-3e364ae72745"). InnerVolumeSpecName "kube-api-access-2frm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:12:20 crc kubenswrapper[4947]: I1129 07:12:20.941982 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d9d3e66-fc0e-4abd-8992-3e364ae72745-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d9d3e66-fc0e-4abd-8992-3e364ae72745" (UID: "3d9d3e66-fc0e-4abd-8992-3e364ae72745"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:12:20 crc kubenswrapper[4947]: I1129 07:12:20.992176 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9d3e66-fc0e-4abd-8992-3e364ae72745-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:20 crc kubenswrapper[4947]: I1129 07:12:20.992249 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9d3e66-fc0e-4abd-8992-3e364ae72745-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:20 crc kubenswrapper[4947]: I1129 07:12:20.992263 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2frm6\" (UniqueName: \"kubernetes.io/projected/3d9d3e66-fc0e-4abd-8992-3e364ae72745-kube-api-access-2frm6\") on node \"crc\" DevicePath \"\"" Nov 29 07:12:21 crc kubenswrapper[4947]: I1129 07:12:21.335637 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bqjz7" Nov 29 07:12:21 crc kubenswrapper[4947]: I1129 07:12:21.336376 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqjz7" event={"ID":"3d9d3e66-fc0e-4abd-8992-3e364ae72745","Type":"ContainerDied","Data":"96fd5c13c9d44a1ee23380a6be7770652df9b4580ca7d9c8c7ce820604026577"} Nov 29 07:12:21 crc kubenswrapper[4947]: I1129 07:12:21.336448 4947 scope.go:117] "RemoveContainer" containerID="0fc75f6ab359ef25e3d45a02261ddba31ef3e1d0a16d38b3dbb4837bda69d7b2" Nov 29 07:12:21 crc kubenswrapper[4947]: I1129 07:12:21.372754 4947 scope.go:117] "RemoveContainer" containerID="35b5de5e888d125f6861802d4e798eea40269af439410e05694a14a4f0169973" Nov 29 07:12:21 crc kubenswrapper[4947]: I1129 07:12:21.384307 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bqjz7"] Nov 29 07:12:21 crc kubenswrapper[4947]: I1129 07:12:21.404013 4947 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bqjz7"] Nov 29 07:12:21 crc kubenswrapper[4947]: I1129 07:12:21.430843 4947 scope.go:117] "RemoveContainer" containerID="786db43ee8d7af2a78b434dbe8d1a56df1a5f166ec79b68e1263d3cec1b1b17d" Nov 29 07:12:23 crc kubenswrapper[4947]: I1129 07:12:23.192501 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9d3e66-fc0e-4abd-8992-3e364ae72745" path="/var/lib/kubelet/pods/3d9d3e66-fc0e-4abd-8992-3e364ae72745/volumes" Nov 29 07:13:22 crc kubenswrapper[4947]: I1129 07:13:22.988371 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:13:22 crc kubenswrapper[4947]: I1129 07:13:22.989432 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:13:52 crc kubenswrapper[4947]: I1129 07:13:52.988318 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:13:53 crc kubenswrapper[4947]: I1129 07:13:52.989758 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Nov 29 07:14:22 crc kubenswrapper[4947]: I1129 07:14:22.987802 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:14:22 crc kubenswrapper[4947]: I1129 07:14:22.988672 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:14:22 crc kubenswrapper[4947]: I1129 07:14:22.988752 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 07:14:22 crc kubenswrapper[4947]: I1129 07:14:22.989903 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6"} pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:14:22 crc kubenswrapper[4947]: I1129 07:14:22.989983 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" containerID="cri-o://4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" gracePeriod=600 Nov 29 07:14:23 crc kubenswrapper[4947]: E1129 07:14:23.135850 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:14:23 crc kubenswrapper[4947]: I1129 07:14:23.828556 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" exitCode=0 Nov 29 07:14:23 crc kubenswrapper[4947]: I1129 07:14:23.828693 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerDied","Data":"4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6"} Nov 29 07:14:23 crc kubenswrapper[4947]: I1129 07:14:23.828917 4947 scope.go:117] "RemoveContainer" containerID="3f350df373165ad762649db5ed6ef7635f995dc53553f1a53de7c07149d00d23" Nov 29 07:14:23 crc kubenswrapper[4947]: I1129 07:14:23.829776 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:14:23 crc kubenswrapper[4947]: E1129 07:14:23.830068 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:14:31 crc kubenswrapper[4947]: I1129 07:14:31.513690 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d8z4g"] Nov 29 07:14:31 crc kubenswrapper[4947]: E1129 07:14:31.515906 4947 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9d3e66-fc0e-4abd-8992-3e364ae72745" containerName="extract-utilities" Nov 29 07:14:31 crc kubenswrapper[4947]: I1129 07:14:31.515939 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9d3e66-fc0e-4abd-8992-3e364ae72745" containerName="extract-utilities" Nov 29 07:14:31 crc kubenswrapper[4947]: E1129 07:14:31.515965 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9d3e66-fc0e-4abd-8992-3e364ae72745" containerName="extract-content" Nov 29 07:14:31 crc kubenswrapper[4947]: I1129 07:14:31.515974 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9d3e66-fc0e-4abd-8992-3e364ae72745" containerName="extract-content" Nov 29 07:14:31 crc kubenswrapper[4947]: E1129 07:14:31.516005 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9d3e66-fc0e-4abd-8992-3e364ae72745" containerName="registry-server" Nov 29 07:14:31 crc kubenswrapper[4947]: I1129 07:14:31.516013 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9d3e66-fc0e-4abd-8992-3e364ae72745" containerName="registry-server" Nov 29 07:14:31 crc kubenswrapper[4947]: I1129 07:14:31.516233 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d9d3e66-fc0e-4abd-8992-3e364ae72745" containerName="registry-server" Nov 29 07:14:31 crc kubenswrapper[4947]: I1129 07:14:31.517893 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d8z4g" Nov 29 07:14:31 crc kubenswrapper[4947]: I1129 07:14:31.530009 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d8z4g"] Nov 29 07:14:31 crc kubenswrapper[4947]: I1129 07:14:31.570369 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqsld\" (UniqueName: \"kubernetes.io/projected/6966e2da-121a-4e30-8997-05fa47afd934-kube-api-access-rqsld\") pod \"redhat-marketplace-d8z4g\" (UID: \"6966e2da-121a-4e30-8997-05fa47afd934\") " pod="openshift-marketplace/redhat-marketplace-d8z4g" Nov 29 07:14:31 crc kubenswrapper[4947]: I1129 07:14:31.570529 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6966e2da-121a-4e30-8997-05fa47afd934-utilities\") pod \"redhat-marketplace-d8z4g\" (UID: \"6966e2da-121a-4e30-8997-05fa47afd934\") " pod="openshift-marketplace/redhat-marketplace-d8z4g" Nov 29 07:14:31 crc kubenswrapper[4947]: I1129 07:14:31.570636 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6966e2da-121a-4e30-8997-05fa47afd934-catalog-content\") pod \"redhat-marketplace-d8z4g\" (UID: \"6966e2da-121a-4e30-8997-05fa47afd934\") " pod="openshift-marketplace/redhat-marketplace-d8z4g" Nov 29 07:14:31 crc kubenswrapper[4947]: I1129 07:14:31.673906 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqsld\" (UniqueName: \"kubernetes.io/projected/6966e2da-121a-4e30-8997-05fa47afd934-kube-api-access-rqsld\") pod \"redhat-marketplace-d8z4g\" (UID: \"6966e2da-121a-4e30-8997-05fa47afd934\") " pod="openshift-marketplace/redhat-marketplace-d8z4g" Nov 29 07:14:31 crc kubenswrapper[4947]: I1129 07:14:31.674014 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6966e2da-121a-4e30-8997-05fa47afd934-utilities\") pod \"redhat-marketplace-d8z4g\" (UID: \"6966e2da-121a-4e30-8997-05fa47afd934\") " pod="openshift-marketplace/redhat-marketplace-d8z4g" Nov 29 07:14:31 crc kubenswrapper[4947]: I1129 07:14:31.674088 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6966e2da-121a-4e30-8997-05fa47afd934-catalog-content\") pod \"redhat-marketplace-d8z4g\" (UID: \"6966e2da-121a-4e30-8997-05fa47afd934\") " pod="openshift-marketplace/redhat-marketplace-d8z4g" Nov 29 07:14:31 crc kubenswrapper[4947]: I1129 07:14:31.674895 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6966e2da-121a-4e30-8997-05fa47afd934-catalog-content\") pod \"redhat-marketplace-d8z4g\" (UID: \"6966e2da-121a-4e30-8997-05fa47afd934\") " pod="openshift-marketplace/redhat-marketplace-d8z4g" Nov 29 07:14:31 crc kubenswrapper[4947]: I1129 07:14:31.675326 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6966e2da-121a-4e30-8997-05fa47afd934-utilities\") pod \"redhat-marketplace-d8z4g\" (UID: \"6966e2da-121a-4e30-8997-05fa47afd934\") " pod="openshift-marketplace/redhat-marketplace-d8z4g" Nov 29 07:14:31 crc kubenswrapper[4947]: I1129 07:14:31.703434 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqsld\" (UniqueName: \"kubernetes.io/projected/6966e2da-121a-4e30-8997-05fa47afd934-kube-api-access-rqsld\") pod \"redhat-marketplace-d8z4g\" (UID: \"6966e2da-121a-4e30-8997-05fa47afd934\") " pod="openshift-marketplace/redhat-marketplace-d8z4g" Nov 29 07:14:31 crc kubenswrapper[4947]: I1129 07:14:31.844045 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d8z4g" Nov 29 07:14:32 crc kubenswrapper[4947]: I1129 07:14:32.374837 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d8z4g"] Nov 29 07:14:32 crc kubenswrapper[4947]: I1129 07:14:32.935460 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8z4g" event={"ID":"6966e2da-121a-4e30-8997-05fa47afd934","Type":"ContainerStarted","Data":"b02aaca974e7a21b887680630532df5dd922953b7bcd2e8541a65f85e15fd0bc"} Nov 29 07:14:32 crc kubenswrapper[4947]: I1129 07:14:32.936005 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8z4g" event={"ID":"6966e2da-121a-4e30-8997-05fa47afd934","Type":"ContainerStarted","Data":"b53ed26899978f1c07a539eae93edae4e04a5db104576ab6a9ece2413d52ecf1"} Nov 29 07:14:33 crc kubenswrapper[4947]: I1129 07:14:33.948477 4947 generic.go:334] "Generic (PLEG): container finished" podID="6966e2da-121a-4e30-8997-05fa47afd934" containerID="b02aaca974e7a21b887680630532df5dd922953b7bcd2e8541a65f85e15fd0bc" exitCode=0 Nov 29 07:14:33 crc kubenswrapper[4947]: I1129 07:14:33.948599 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8z4g" event={"ID":"6966e2da-121a-4e30-8997-05fa47afd934","Type":"ContainerDied","Data":"b02aaca974e7a21b887680630532df5dd922953b7bcd2e8541a65f85e15fd0bc"} Nov 29 07:14:33 crc kubenswrapper[4947]: I1129 07:14:33.952325 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:14:35 crc kubenswrapper[4947]: I1129 07:14:35.973597 4947 generic.go:334] "Generic (PLEG): container finished" podID="6966e2da-121a-4e30-8997-05fa47afd934" containerID="f013cda50f3736a600e4dab86933695964841eaab0f24d429663eb102d23dc1f" exitCode=0 Nov 29 07:14:35 crc kubenswrapper[4947]: I1129 07:14:35.973775 4947 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-d8z4g" event={"ID":"6966e2da-121a-4e30-8997-05fa47afd934","Type":"ContainerDied","Data":"f013cda50f3736a600e4dab86933695964841eaab0f24d429663eb102d23dc1f"} Nov 29 07:14:36 crc kubenswrapper[4947]: I1129 07:14:36.991073 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8z4g" event={"ID":"6966e2da-121a-4e30-8997-05fa47afd934","Type":"ContainerStarted","Data":"22c2af713bd758f856718f3333cf004536ce23fcb3e18e0e00fc7fecd7278ec1"} Nov 29 07:14:37 crc kubenswrapper[4947]: I1129 07:14:37.026199 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d8z4g" podStartSLOduration=3.517255687 podStartE2EDuration="6.026170968s" podCreationTimestamp="2025-11-29 07:14:31 +0000 UTC" firstStartedPulling="2025-11-29 07:14:33.95173475 +0000 UTC m=+2424.996116831" lastFinishedPulling="2025-11-29 07:14:36.460650021 +0000 UTC m=+2427.505032112" observedRunningTime="2025-11-29 07:14:37.022143876 +0000 UTC m=+2428.066525967" watchObservedRunningTime="2025-11-29 07:14:37.026170968 +0000 UTC m=+2428.070553049" Nov 29 07:14:39 crc kubenswrapper[4947]: I1129 07:14:39.186656 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:14:39 crc kubenswrapper[4947]: E1129 07:14:39.187966 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:14:41 crc kubenswrapper[4947]: I1129 07:14:41.844711 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-d8z4g" Nov 29 07:14:41 crc kubenswrapper[4947]: I1129 07:14:41.845416 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d8z4g" Nov 29 07:14:41 crc kubenswrapper[4947]: I1129 07:14:41.895962 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d8z4g" Nov 29 07:14:42 crc kubenswrapper[4947]: I1129 07:14:42.102581 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d8z4g" Nov 29 07:14:42 crc kubenswrapper[4947]: I1129 07:14:42.165404 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d8z4g"] Nov 29 07:14:44 crc kubenswrapper[4947]: I1129 07:14:44.071193 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d8z4g" podUID="6966e2da-121a-4e30-8997-05fa47afd934" containerName="registry-server" containerID="cri-o://22c2af713bd758f856718f3333cf004536ce23fcb3e18e0e00fc7fecd7278ec1" gracePeriod=2 Nov 29 07:14:45 crc kubenswrapper[4947]: I1129 07:14:45.083520 4947 generic.go:334] "Generic (PLEG): container finished" podID="6966e2da-121a-4e30-8997-05fa47afd934" containerID="22c2af713bd758f856718f3333cf004536ce23fcb3e18e0e00fc7fecd7278ec1" exitCode=0 Nov 29 07:14:45 crc kubenswrapper[4947]: I1129 07:14:45.083725 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8z4g" event={"ID":"6966e2da-121a-4e30-8997-05fa47afd934","Type":"ContainerDied","Data":"22c2af713bd758f856718f3333cf004536ce23fcb3e18e0e00fc7fecd7278ec1"} Nov 29 07:14:45 crc kubenswrapper[4947]: I1129 07:14:45.680130 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d8z4g" Nov 29 07:14:45 crc kubenswrapper[4947]: I1129 07:14:45.795134 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqsld\" (UniqueName: \"kubernetes.io/projected/6966e2da-121a-4e30-8997-05fa47afd934-kube-api-access-rqsld\") pod \"6966e2da-121a-4e30-8997-05fa47afd934\" (UID: \"6966e2da-121a-4e30-8997-05fa47afd934\") " Nov 29 07:14:45 crc kubenswrapper[4947]: I1129 07:14:45.795574 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6966e2da-121a-4e30-8997-05fa47afd934-catalog-content\") pod \"6966e2da-121a-4e30-8997-05fa47afd934\" (UID: \"6966e2da-121a-4e30-8997-05fa47afd934\") " Nov 29 07:14:45 crc kubenswrapper[4947]: I1129 07:14:45.795625 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6966e2da-121a-4e30-8997-05fa47afd934-utilities\") pod \"6966e2da-121a-4e30-8997-05fa47afd934\" (UID: \"6966e2da-121a-4e30-8997-05fa47afd934\") " Nov 29 07:14:45 crc kubenswrapper[4947]: I1129 07:14:45.797301 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6966e2da-121a-4e30-8997-05fa47afd934-utilities" (OuterVolumeSpecName: "utilities") pod "6966e2da-121a-4e30-8997-05fa47afd934" (UID: "6966e2da-121a-4e30-8997-05fa47afd934"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:14:45 crc kubenswrapper[4947]: I1129 07:14:45.805175 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6966e2da-121a-4e30-8997-05fa47afd934-kube-api-access-rqsld" (OuterVolumeSpecName: "kube-api-access-rqsld") pod "6966e2da-121a-4e30-8997-05fa47afd934" (UID: "6966e2da-121a-4e30-8997-05fa47afd934"). InnerVolumeSpecName "kube-api-access-rqsld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:14:45 crc kubenswrapper[4947]: I1129 07:14:45.818768 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6966e2da-121a-4e30-8997-05fa47afd934-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6966e2da-121a-4e30-8997-05fa47afd934" (UID: "6966e2da-121a-4e30-8997-05fa47afd934"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:14:45 crc kubenswrapper[4947]: I1129 07:14:45.901151 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6966e2da-121a-4e30-8997-05fa47afd934-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:45 crc kubenswrapper[4947]: I1129 07:14:45.901248 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6966e2da-121a-4e30-8997-05fa47afd934-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:45 crc kubenswrapper[4947]: I1129 07:14:45.901269 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqsld\" (UniqueName: \"kubernetes.io/projected/6966e2da-121a-4e30-8997-05fa47afd934-kube-api-access-rqsld\") on node \"crc\" DevicePath \"\"" Nov 29 07:14:46 crc kubenswrapper[4947]: I1129 07:14:46.098694 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8z4g" event={"ID":"6966e2da-121a-4e30-8997-05fa47afd934","Type":"ContainerDied","Data":"b53ed26899978f1c07a539eae93edae4e04a5db104576ab6a9ece2413d52ecf1"} Nov 29 07:14:46 crc kubenswrapper[4947]: I1129 07:14:46.098772 4947 scope.go:117] "RemoveContainer" containerID="22c2af713bd758f856718f3333cf004536ce23fcb3e18e0e00fc7fecd7278ec1" Nov 29 07:14:46 crc kubenswrapper[4947]: I1129 07:14:46.098780 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d8z4g" Nov 29 07:14:46 crc kubenswrapper[4947]: I1129 07:14:46.133414 4947 scope.go:117] "RemoveContainer" containerID="f013cda50f3736a600e4dab86933695964841eaab0f24d429663eb102d23dc1f" Nov 29 07:14:46 crc kubenswrapper[4947]: I1129 07:14:46.147728 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d8z4g"] Nov 29 07:14:46 crc kubenswrapper[4947]: I1129 07:14:46.157748 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d8z4g"] Nov 29 07:14:46 crc kubenswrapper[4947]: I1129 07:14:46.160984 4947 scope.go:117] "RemoveContainer" containerID="b02aaca974e7a21b887680630532df5dd922953b7bcd2e8541a65f85e15fd0bc" Nov 29 07:14:47 crc kubenswrapper[4947]: I1129 07:14:47.192452 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6966e2da-121a-4e30-8997-05fa47afd934" path="/var/lib/kubelet/pods/6966e2da-121a-4e30-8997-05fa47afd934/volumes" Nov 29 07:14:54 crc kubenswrapper[4947]: I1129 07:14:54.097557 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:14:54 crc kubenswrapper[4947]: E1129 07:14:54.098737 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:15:00 crc kubenswrapper[4947]: I1129 07:15:00.188475 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406675-phqfz"] Nov 29 07:15:00 crc kubenswrapper[4947]: E1129 07:15:00.189774 4947 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="6966e2da-121a-4e30-8997-05fa47afd934" containerName="extract-utilities" Nov 29 07:15:00 crc kubenswrapper[4947]: I1129 07:15:00.189802 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="6966e2da-121a-4e30-8997-05fa47afd934" containerName="extract-utilities" Nov 29 07:15:00 crc kubenswrapper[4947]: E1129 07:15:00.189827 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6966e2da-121a-4e30-8997-05fa47afd934" containerName="registry-server" Nov 29 07:15:00 crc kubenswrapper[4947]: I1129 07:15:00.189835 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="6966e2da-121a-4e30-8997-05fa47afd934" containerName="registry-server" Nov 29 07:15:00 crc kubenswrapper[4947]: E1129 07:15:00.189879 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6966e2da-121a-4e30-8997-05fa47afd934" containerName="extract-content" Nov 29 07:15:00 crc kubenswrapper[4947]: I1129 07:15:00.189886 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="6966e2da-121a-4e30-8997-05fa47afd934" containerName="extract-content" Nov 29 07:15:00 crc kubenswrapper[4947]: I1129 07:15:00.190115 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="6966e2da-121a-4e30-8997-05fa47afd934" containerName="registry-server" Nov 29 07:15:00 crc kubenswrapper[4947]: I1129 07:15:00.191094 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-phqfz" Nov 29 07:15:00 crc kubenswrapper[4947]: I1129 07:15:00.195247 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 07:15:00 crc kubenswrapper[4947]: I1129 07:15:00.197962 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 07:15:00 crc kubenswrapper[4947]: I1129 07:15:00.204708 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406675-phqfz"] Nov 29 07:15:00 crc kubenswrapper[4947]: I1129 07:15:00.240131 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbbmq\" (UniqueName: \"kubernetes.io/projected/6ed24124-04a9-44f2-aef4-831e83d62724-kube-api-access-qbbmq\") pod \"collect-profiles-29406675-phqfz\" (UID: \"6ed24124-04a9-44f2-aef4-831e83d62724\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-phqfz" Nov 29 07:15:00 crc kubenswrapper[4947]: I1129 07:15:00.240304 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ed24124-04a9-44f2-aef4-831e83d62724-secret-volume\") pod \"collect-profiles-29406675-phqfz\" (UID: \"6ed24124-04a9-44f2-aef4-831e83d62724\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-phqfz" Nov 29 07:15:00 crc kubenswrapper[4947]: I1129 07:15:00.240369 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ed24124-04a9-44f2-aef4-831e83d62724-config-volume\") pod \"collect-profiles-29406675-phqfz\" (UID: \"6ed24124-04a9-44f2-aef4-831e83d62724\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-phqfz" Nov 29 07:15:00 crc kubenswrapper[4947]: I1129 07:15:00.342976 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbbmq\" (UniqueName: \"kubernetes.io/projected/6ed24124-04a9-44f2-aef4-831e83d62724-kube-api-access-qbbmq\") pod \"collect-profiles-29406675-phqfz\" (UID: \"6ed24124-04a9-44f2-aef4-831e83d62724\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-phqfz" Nov 29 07:15:00 crc kubenswrapper[4947]: I1129 07:15:00.343212 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ed24124-04a9-44f2-aef4-831e83d62724-secret-volume\") pod \"collect-profiles-29406675-phqfz\" (UID: \"6ed24124-04a9-44f2-aef4-831e83d62724\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-phqfz" Nov 29 07:15:00 crc kubenswrapper[4947]: I1129 07:15:00.343316 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ed24124-04a9-44f2-aef4-831e83d62724-config-volume\") pod \"collect-profiles-29406675-phqfz\" (UID: \"6ed24124-04a9-44f2-aef4-831e83d62724\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-phqfz" Nov 29 07:15:00 crc kubenswrapper[4947]: I1129 07:15:00.344623 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ed24124-04a9-44f2-aef4-831e83d62724-config-volume\") pod \"collect-profiles-29406675-phqfz\" (UID: \"6ed24124-04a9-44f2-aef4-831e83d62724\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-phqfz" Nov 29 07:15:00 crc kubenswrapper[4947]: I1129 07:15:00.352216 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/6ed24124-04a9-44f2-aef4-831e83d62724-secret-volume\") pod \"collect-profiles-29406675-phqfz\" (UID: \"6ed24124-04a9-44f2-aef4-831e83d62724\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-phqfz" Nov 29 07:15:00 crc kubenswrapper[4947]: I1129 07:15:00.371668 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbbmq\" (UniqueName: \"kubernetes.io/projected/6ed24124-04a9-44f2-aef4-831e83d62724-kube-api-access-qbbmq\") pod \"collect-profiles-29406675-phqfz\" (UID: \"6ed24124-04a9-44f2-aef4-831e83d62724\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-phqfz" Nov 29 07:15:00 crc kubenswrapper[4947]: I1129 07:15:00.530557 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-phqfz" Nov 29 07:15:01 crc kubenswrapper[4947]: I1129 07:15:01.105180 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406675-phqfz"] Nov 29 07:15:01 crc kubenswrapper[4947]: I1129 07:15:01.233028 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-phqfz" event={"ID":"6ed24124-04a9-44f2-aef4-831e83d62724","Type":"ContainerStarted","Data":"0e6443dc7e00f0287eb3cb75fd6f64bab6b5174d195dc3f507479cd2b223c242"} Nov 29 07:15:02 crc kubenswrapper[4947]: I1129 07:15:02.225081 4947 generic.go:334] "Generic (PLEG): container finished" podID="6ed24124-04a9-44f2-aef4-831e83d62724" containerID="6d734f57fabf8ea5a75c2189060f1361175d77cf6c1c42a1aa4c405dc94e8bfa" exitCode=0 Nov 29 07:15:02 crc kubenswrapper[4947]: I1129 07:15:02.225272 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-phqfz" 
event={"ID":"6ed24124-04a9-44f2-aef4-831e83d62724","Type":"ContainerDied","Data":"6d734f57fabf8ea5a75c2189060f1361175d77cf6c1c42a1aa4c405dc94e8bfa"} Nov 29 07:15:03 crc kubenswrapper[4947]: I1129 07:15:03.577393 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-phqfz" Nov 29 07:15:03 crc kubenswrapper[4947]: I1129 07:15:03.722064 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ed24124-04a9-44f2-aef4-831e83d62724-secret-volume\") pod \"6ed24124-04a9-44f2-aef4-831e83d62724\" (UID: \"6ed24124-04a9-44f2-aef4-831e83d62724\") " Nov 29 07:15:03 crc kubenswrapper[4947]: I1129 07:15:03.722447 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ed24124-04a9-44f2-aef4-831e83d62724-config-volume\") pod \"6ed24124-04a9-44f2-aef4-831e83d62724\" (UID: \"6ed24124-04a9-44f2-aef4-831e83d62724\") " Nov 29 07:15:03 crc kubenswrapper[4947]: I1129 07:15:03.722643 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbbmq\" (UniqueName: \"kubernetes.io/projected/6ed24124-04a9-44f2-aef4-831e83d62724-kube-api-access-qbbmq\") pod \"6ed24124-04a9-44f2-aef4-831e83d62724\" (UID: \"6ed24124-04a9-44f2-aef4-831e83d62724\") " Nov 29 07:15:03 crc kubenswrapper[4947]: I1129 07:15:03.723418 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ed24124-04a9-44f2-aef4-831e83d62724-config-volume" (OuterVolumeSpecName: "config-volume") pod "6ed24124-04a9-44f2-aef4-831e83d62724" (UID: "6ed24124-04a9-44f2-aef4-831e83d62724"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:03 crc kubenswrapper[4947]: I1129 07:15:03.725461 4947 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ed24124-04a9-44f2-aef4-831e83d62724-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:03 crc kubenswrapper[4947]: I1129 07:15:03.730813 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ed24124-04a9-44f2-aef4-831e83d62724-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6ed24124-04a9-44f2-aef4-831e83d62724" (UID: "6ed24124-04a9-44f2-aef4-831e83d62724"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:03 crc kubenswrapper[4947]: I1129 07:15:03.731582 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ed24124-04a9-44f2-aef4-831e83d62724-kube-api-access-qbbmq" (OuterVolumeSpecName: "kube-api-access-qbbmq") pod "6ed24124-04a9-44f2-aef4-831e83d62724" (UID: "6ed24124-04a9-44f2-aef4-831e83d62724"). InnerVolumeSpecName "kube-api-access-qbbmq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:03 crc kubenswrapper[4947]: I1129 07:15:03.828000 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbbmq\" (UniqueName: \"kubernetes.io/projected/6ed24124-04a9-44f2-aef4-831e83d62724-kube-api-access-qbbmq\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:03 crc kubenswrapper[4947]: I1129 07:15:03.828053 4947 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ed24124-04a9-44f2-aef4-831e83d62724-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:04 crc kubenswrapper[4947]: I1129 07:15:04.249075 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-phqfz" event={"ID":"6ed24124-04a9-44f2-aef4-831e83d62724","Type":"ContainerDied","Data":"0e6443dc7e00f0287eb3cb75fd6f64bab6b5174d195dc3f507479cd2b223c242"} Nov 29 07:15:04 crc kubenswrapper[4947]: I1129 07:15:04.249652 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e6443dc7e00f0287eb3cb75fd6f64bab6b5174d195dc3f507479cd2b223c242" Nov 29 07:15:04 crc kubenswrapper[4947]: I1129 07:15:04.249446 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-phqfz" Nov 29 07:15:04 crc kubenswrapper[4947]: I1129 07:15:04.664190 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh"] Nov 29 07:15:04 crc kubenswrapper[4947]: I1129 07:15:04.673677 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406630-wnfkh"] Nov 29 07:15:05 crc kubenswrapper[4947]: I1129 07:15:05.196486 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f161f69-0220-4b3f-9f46-76277cd105f9" path="/var/lib/kubelet/pods/3f161f69-0220-4b3f-9f46-76277cd105f9/volumes" Nov 29 07:15:07 crc kubenswrapper[4947]: I1129 07:15:07.180194 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:15:07 crc kubenswrapper[4947]: E1129 07:15:07.180820 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:15:19 crc kubenswrapper[4947]: I1129 07:15:19.187305 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:15:19 crc kubenswrapper[4947]: E1129 07:15:19.188381 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:15:31 crc kubenswrapper[4947]: I1129 07:15:31.178253 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:15:31 crc kubenswrapper[4947]: E1129 07:15:31.179405 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:15:38 crc kubenswrapper[4947]: I1129 07:15:38.338190 4947 scope.go:117] "RemoveContainer" containerID="14db6dd613c8c9dea1c8d42e9341e43b5be15363a2ac81ec32d4573cf39a077f" Nov 29 07:15:41 crc kubenswrapper[4947]: I1129 07:15:41.875159 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2"] Nov 29 07:15:41 crc kubenswrapper[4947]: I1129 07:15:41.917701 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl"] Nov 29 07:15:41 crc kubenswrapper[4947]: I1129 07:15:41.933361 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh"] Nov 29 07:15:41 crc kubenswrapper[4947]: I1129 07:15:41.944058 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tc6t2"] Nov 29 07:15:41 crc kubenswrapper[4947]: I1129 07:15:41.953559 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5"] Nov 29 07:15:41 crc kubenswrapper[4947]: I1129 07:15:41.962433 4947 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs"] Nov 29 07:15:41 crc kubenswrapper[4947]: I1129 07:15:41.970131 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q78dl"] Nov 29 07:15:41 crc kubenswrapper[4947]: I1129 07:15:41.986418 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6gpzh"] Nov 29 07:15:41 crc kubenswrapper[4947]: I1129 07:15:41.993839 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nxl25"] Nov 29 07:15:42 crc kubenswrapper[4947]: I1129 07:15:42.002960 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zx6rs"] Nov 29 07:15:42 crc kubenswrapper[4947]: I1129 07:15:42.012372 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g"] Nov 29 07:15:42 crc kubenswrapper[4947]: I1129 07:15:42.023398 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj"] Nov 29 07:15:42 crc kubenswrapper[4947]: I1129 07:15:42.033454 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qp8m5"] Nov 29 07:15:42 crc kubenswrapper[4947]: I1129 07:15:42.043613 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nxl25"] Nov 29 07:15:42 crc kubenswrapper[4947]: I1129 07:15:42.052617 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcn4g"] Nov 29 07:15:42 crc kubenswrapper[4947]: I1129 07:15:42.062373 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9"] Nov 29 07:15:42 
crc kubenswrapper[4947]: I1129 07:15:42.073908 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6nrnj"] Nov 29 07:15:42 crc kubenswrapper[4947]: I1129 07:15:42.085506 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z"] Nov 29 07:15:42 crc kubenswrapper[4947]: I1129 07:15:42.098605 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5d66z"] Nov 29 07:15:42 crc kubenswrapper[4947]: I1129 07:15:42.108814 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8kzv9"] Nov 29 07:15:43 crc kubenswrapper[4947]: I1129 07:15:43.190998 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04257891-6366-45f6-95e9-3d2d479d7380" path="/var/lib/kubelet/pods/04257891-6366-45f6-95e9-3d2d479d7380/volumes" Nov 29 07:15:43 crc kubenswrapper[4947]: I1129 07:15:43.192168 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05604b27-055f-4688-9382-f1c91615bc46" path="/var/lib/kubelet/pods/05604b27-055f-4688-9382-f1c91615bc46/volumes" Nov 29 07:15:43 crc kubenswrapper[4947]: I1129 07:15:43.192737 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c0f3f4c-3de4-4c42-b53f-57845082dd20" path="/var/lib/kubelet/pods/2c0f3f4c-3de4-4c42-b53f-57845082dd20/volumes" Nov 29 07:15:43 crc kubenswrapper[4947]: I1129 07:15:43.193267 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4034151e-79e6-4a43-a0aa-3d9a41af19ee" path="/var/lib/kubelet/pods/4034151e-79e6-4a43-a0aa-3d9a41af19ee/volumes" Nov 29 07:15:43 crc kubenswrapper[4947]: I1129 07:15:43.194429 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="624b369c-3174-4b17-86de-0950411e5ddf" path="/var/lib/kubelet/pods/624b369c-3174-4b17-86de-0950411e5ddf/volumes" Nov 29 07:15:43 
crc kubenswrapper[4947]: I1129 07:15:43.194994 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89ab0899-e8dd-40b6-9db7-6d4d567fc251" path="/var/lib/kubelet/pods/89ab0899-e8dd-40b6-9db7-6d4d567fc251/volumes" Nov 29 07:15:43 crc kubenswrapper[4947]: I1129 07:15:43.195561 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da557e2b-8588-4c78-9c06-7f8e891bc89a" path="/var/lib/kubelet/pods/da557e2b-8588-4c78-9c06-7f8e891bc89a/volumes" Nov 29 07:15:43 crc kubenswrapper[4947]: I1129 07:15:43.196857 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eafd7e68-2ecc-4ad5-aacd-844c7a197e36" path="/var/lib/kubelet/pods/eafd7e68-2ecc-4ad5-aacd-844c7a197e36/volumes" Nov 29 07:15:43 crc kubenswrapper[4947]: I1129 07:15:43.197565 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed39e78d-b4a5-4a28-95c8-a608d580b517" path="/var/lib/kubelet/pods/ed39e78d-b4a5-4a28-95c8-a608d580b517/volumes" Nov 29 07:15:43 crc kubenswrapper[4947]: I1129 07:15:43.198268 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3a52042-9a2c-4179-add5-f4b0da2f9eb4" path="/var/lib/kubelet/pods/f3a52042-9a2c-4179-add5-f4b0da2f9eb4/volumes" Nov 29 07:15:44 crc kubenswrapper[4947]: I1129 07:15:44.179294 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:15:44 crc kubenswrapper[4947]: E1129 07:15:44.180266 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:15:47 crc kubenswrapper[4947]: I1129 07:15:47.990451 4947 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb"] Nov 29 07:15:47 crc kubenswrapper[4947]: E1129 07:15:47.991505 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ed24124-04a9-44f2-aef4-831e83d62724" containerName="collect-profiles" Nov 29 07:15:47 crc kubenswrapper[4947]: I1129 07:15:47.991530 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ed24124-04a9-44f2-aef4-831e83d62724" containerName="collect-profiles" Nov 29 07:15:47 crc kubenswrapper[4947]: I1129 07:15:47.991796 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ed24124-04a9-44f2-aef4-831e83d62724" containerName="collect-profiles" Nov 29 07:15:47 crc kubenswrapper[4947]: I1129 07:15:47.992714 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" Nov 29 07:15:47 crc kubenswrapper[4947]: I1129 07:15:47.994805 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:15:47 crc kubenswrapper[4947]: I1129 07:15:47.994941 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:15:47 crc kubenswrapper[4947]: I1129 07:15:47.995046 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 29 07:15:48 crc kubenswrapper[4947]: I1129 07:15:47.996388 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:15:48 crc kubenswrapper[4947]: I1129 07:15:47.996588 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:15:48 crc kubenswrapper[4947]: I1129 07:15:48.010764 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb"] Nov 29 07:15:48 crc 
kubenswrapper[4947]: I1129 07:15:48.097699 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xzv4\" (UniqueName: \"kubernetes.io/projected/a0c726c3-2734-4336-a716-b21a6b32f9f9-kube-api-access-5xzv4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb\" (UID: \"a0c726c3-2734-4336-a716-b21a6b32f9f9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" Nov 29 07:15:48 crc kubenswrapper[4947]: I1129 07:15:48.098164 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb\" (UID: \"a0c726c3-2734-4336-a716-b21a6b32f9f9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" Nov 29 07:15:48 crc kubenswrapper[4947]: I1129 07:15:48.098301 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb\" (UID: \"a0c726c3-2734-4336-a716-b21a6b32f9f9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" Nov 29 07:15:48 crc kubenswrapper[4947]: I1129 07:15:48.098460 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb\" (UID: \"a0c726c3-2734-4336-a716-b21a6b32f9f9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" Nov 29 07:15:48 crc kubenswrapper[4947]: I1129 07:15:48.098578 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb\" (UID: \"a0c726c3-2734-4336-a716-b21a6b32f9f9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" Nov 29 07:15:48 crc kubenswrapper[4947]: I1129 07:15:48.201466 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb\" (UID: \"a0c726c3-2734-4336-a716-b21a6b32f9f9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" Nov 29 07:15:48 crc kubenswrapper[4947]: I1129 07:15:48.201839 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb\" (UID: \"a0c726c3-2734-4336-a716-b21a6b32f9f9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" Nov 29 07:15:48 crc kubenswrapper[4947]: I1129 07:15:48.202137 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb\" (UID: \"a0c726c3-2734-4336-a716-b21a6b32f9f9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" Nov 29 07:15:48 crc kubenswrapper[4947]: I1129 07:15:48.202491 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb\" (UID: \"a0c726c3-2734-4336-a716-b21a6b32f9f9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" Nov 
29 07:15:48 crc kubenswrapper[4947]: I1129 07:15:48.202683 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xzv4\" (UniqueName: \"kubernetes.io/projected/a0c726c3-2734-4336-a716-b21a6b32f9f9-kube-api-access-5xzv4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb\" (UID: \"a0c726c3-2734-4336-a716-b21a6b32f9f9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" Nov 29 07:15:48 crc kubenswrapper[4947]: I1129 07:15:48.210167 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb\" (UID: \"a0c726c3-2734-4336-a716-b21a6b32f9f9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" Nov 29 07:15:48 crc kubenswrapper[4947]: I1129 07:15:48.210315 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb\" (UID: \"a0c726c3-2734-4336-a716-b21a6b32f9f9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" Nov 29 07:15:48 crc kubenswrapper[4947]: I1129 07:15:48.211237 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb\" (UID: \"a0c726c3-2734-4336-a716-b21a6b32f9f9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" Nov 29 07:15:48 crc kubenswrapper[4947]: I1129 07:15:48.211535 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-repo-setup-combined-ca-bundle\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb\" (UID: \"a0c726c3-2734-4336-a716-b21a6b32f9f9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" Nov 29 07:15:48 crc kubenswrapper[4947]: I1129 07:15:48.223272 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xzv4\" (UniqueName: \"kubernetes.io/projected/a0c726c3-2734-4336-a716-b21a6b32f9f9-kube-api-access-5xzv4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb\" (UID: \"a0c726c3-2734-4336-a716-b21a6b32f9f9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" Nov 29 07:15:48 crc kubenswrapper[4947]: I1129 07:15:48.318796 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" Nov 29 07:15:48 crc kubenswrapper[4947]: I1129 07:15:48.950700 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb"] Nov 29 07:15:49 crc kubenswrapper[4947]: I1129 07:15:49.722420 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" event={"ID":"a0c726c3-2734-4336-a716-b21a6b32f9f9","Type":"ContainerStarted","Data":"0a25ba220dac9183c422ceaa8a96f01ef73b60482da8efc0cd30a78cacc121a6"} Nov 29 07:15:50 crc kubenswrapper[4947]: I1129 07:15:50.739742 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" event={"ID":"a0c726c3-2734-4336-a716-b21a6b32f9f9","Type":"ContainerStarted","Data":"12424460f270c584c2ddbeca1ac8fc49f17bd5b60523dd65dd3a5fb51104e63d"} Nov 29 07:15:50 crc kubenswrapper[4947]: I1129 07:15:50.772655 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" podStartSLOduration=3.081961552 podStartE2EDuration="3.772627502s" podCreationTimestamp="2025-11-29 
07:15:47 +0000 UTC" firstStartedPulling="2025-11-29 07:15:48.960210274 +0000 UTC m=+2500.004592355" lastFinishedPulling="2025-11-29 07:15:49.650876214 +0000 UTC m=+2500.695258305" observedRunningTime="2025-11-29 07:15:50.759837728 +0000 UTC m=+2501.804219819" watchObservedRunningTime="2025-11-29 07:15:50.772627502 +0000 UTC m=+2501.817009583" Nov 29 07:15:57 crc kubenswrapper[4947]: I1129 07:15:57.180885 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:15:57 crc kubenswrapper[4947]: E1129 07:15:57.182127 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:16:02 crc kubenswrapper[4947]: I1129 07:16:02.948041 4947 generic.go:334] "Generic (PLEG): container finished" podID="a0c726c3-2734-4336-a716-b21a6b32f9f9" containerID="12424460f270c584c2ddbeca1ac8fc49f17bd5b60523dd65dd3a5fb51104e63d" exitCode=0 Nov 29 07:16:02 crc kubenswrapper[4947]: I1129 07:16:02.948144 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" event={"ID":"a0c726c3-2734-4336-a716-b21a6b32f9f9","Type":"ContainerDied","Data":"12424460f270c584c2ddbeca1ac8fc49f17bd5b60523dd65dd3a5fb51104e63d"} Nov 29 07:16:04 crc kubenswrapper[4947]: I1129 07:16:04.618552 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" Nov 29 07:16:04 crc kubenswrapper[4947]: I1129 07:16:04.805162 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-ssh-key\") pod \"a0c726c3-2734-4336-a716-b21a6b32f9f9\" (UID: \"a0c726c3-2734-4336-a716-b21a6b32f9f9\") " Nov 29 07:16:04 crc kubenswrapper[4947]: I1129 07:16:04.805402 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-repo-setup-combined-ca-bundle\") pod \"a0c726c3-2734-4336-a716-b21a6b32f9f9\" (UID: \"a0c726c3-2734-4336-a716-b21a6b32f9f9\") " Nov 29 07:16:04 crc kubenswrapper[4947]: I1129 07:16:04.807062 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xzv4\" (UniqueName: \"kubernetes.io/projected/a0c726c3-2734-4336-a716-b21a6b32f9f9-kube-api-access-5xzv4\") pod \"a0c726c3-2734-4336-a716-b21a6b32f9f9\" (UID: \"a0c726c3-2734-4336-a716-b21a6b32f9f9\") " Nov 29 07:16:04 crc kubenswrapper[4947]: I1129 07:16:04.807326 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-inventory\") pod \"a0c726c3-2734-4336-a716-b21a6b32f9f9\" (UID: \"a0c726c3-2734-4336-a716-b21a6b32f9f9\") " Nov 29 07:16:04 crc kubenswrapper[4947]: I1129 07:16:04.807362 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-ceph\") pod \"a0c726c3-2734-4336-a716-b21a6b32f9f9\" (UID: \"a0c726c3-2734-4336-a716-b21a6b32f9f9\") " Nov 29 07:16:04 crc kubenswrapper[4947]: I1129 07:16:04.813851 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-ceph" (OuterVolumeSpecName: "ceph") pod "a0c726c3-2734-4336-a716-b21a6b32f9f9" (UID: "a0c726c3-2734-4336-a716-b21a6b32f9f9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:16:04 crc kubenswrapper[4947]: I1129 07:16:04.815114 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0c726c3-2734-4336-a716-b21a6b32f9f9-kube-api-access-5xzv4" (OuterVolumeSpecName: "kube-api-access-5xzv4") pod "a0c726c3-2734-4336-a716-b21a6b32f9f9" (UID: "a0c726c3-2734-4336-a716-b21a6b32f9f9"). InnerVolumeSpecName "kube-api-access-5xzv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:16:04 crc kubenswrapper[4947]: I1129 07:16:04.828523 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "a0c726c3-2734-4336-a716-b21a6b32f9f9" (UID: "a0c726c3-2734-4336-a716-b21a6b32f9f9"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:16:04 crc kubenswrapper[4947]: I1129 07:16:04.846265 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a0c726c3-2734-4336-a716-b21a6b32f9f9" (UID: "a0c726c3-2734-4336-a716-b21a6b32f9f9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:16:04 crc kubenswrapper[4947]: I1129 07:16:04.852843 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-inventory" (OuterVolumeSpecName: "inventory") pod "a0c726c3-2734-4336-a716-b21a6b32f9f9" (UID: "a0c726c3-2734-4336-a716-b21a6b32f9f9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:16:04 crc kubenswrapper[4947]: I1129 07:16:04.910549 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xzv4\" (UniqueName: \"kubernetes.io/projected/a0c726c3-2734-4336-a716-b21a6b32f9f9-kube-api-access-5xzv4\") on node \"crc\" DevicePath \"\"" Nov 29 07:16:04 crc kubenswrapper[4947]: I1129 07:16:04.910628 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:16:04 crc kubenswrapper[4947]: I1129 07:16:04.910642 4947 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-ceph\") on node \"crc\" DevicePath \"\"" Nov 29 07:16:04 crc kubenswrapper[4947]: I1129 07:16:04.910653 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:16:04 crc kubenswrapper[4947]: I1129 07:16:04.910667 4947 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c726c3-2734-4336-a716-b21a6b32f9f9-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:16:04 crc kubenswrapper[4947]: I1129 07:16:04.972088 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" event={"ID":"a0c726c3-2734-4336-a716-b21a6b32f9f9","Type":"ContainerDied","Data":"0a25ba220dac9183c422ceaa8a96f01ef73b60482da8efc0cd30a78cacc121a6"} Nov 29 07:16:04 crc kubenswrapper[4947]: I1129 07:16:04.972152 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a25ba220dac9183c422ceaa8a96f01ef73b60482da8efc0cd30a78cacc121a6" Nov 29 07:16:04 crc kubenswrapper[4947]: I1129 07:16:04.972193 
4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.101164 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd"] Nov 29 07:16:05 crc kubenswrapper[4947]: E1129 07:16:05.103214 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0c726c3-2734-4336-a716-b21a6b32f9f9" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.103262 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c726c3-2734-4336-a716-b21a6b32f9f9" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.103520 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0c726c3-2734-4336-a716-b21a6b32f9f9" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.104423 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.107589 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.108281 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.108297 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.108963 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.112786 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.118828 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd"] Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.216145 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd\" (UID: \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.216213 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd\" (UID: 
\"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.216264 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd\" (UID: \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.216407 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dghsc\" (UniqueName: \"kubernetes.io/projected/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-kube-api-access-dghsc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd\" (UID: \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.216468 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd\" (UID: \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.318187 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd\" (UID: \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.318301 4947 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd\" (UID: \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.318354 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd\" (UID: \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.318374 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd\" (UID: \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.318539 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dghsc\" (UniqueName: \"kubernetes.io/projected/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-kube-api-access-dghsc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd\" (UID: \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.324850 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd\" (UID: \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.325041 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd\" (UID: \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.326254 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd\" (UID: \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.330841 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd\" (UID: \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.341350 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dghsc\" (UniqueName: \"kubernetes.io/projected/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-kube-api-access-dghsc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd\" (UID: \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" Nov 29 07:16:05 crc kubenswrapper[4947]: I1129 07:16:05.430929 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" Nov 29 07:16:06 crc kubenswrapper[4947]: I1129 07:16:06.095465 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd"] Nov 29 07:16:07 crc kubenswrapper[4947]: I1129 07:16:07.000543 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" event={"ID":"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039","Type":"ContainerStarted","Data":"1554daa32e78887c286784849835f44b00c9f55b89c738c059494dab41e397d6"} Nov 29 07:16:07 crc kubenswrapper[4947]: I1129 07:16:07.001148 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" event={"ID":"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039","Type":"ContainerStarted","Data":"a418b6f6254e76bed9f70c2e85fa10f7a7d2cf9dd492e62c80ebd84d81809cf9"} Nov 29 07:16:07 crc kubenswrapper[4947]: I1129 07:16:07.023541 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" podStartSLOduration=1.552441923 podStartE2EDuration="2.023512406s" podCreationTimestamp="2025-11-29 07:16:05 +0000 UTC" firstStartedPulling="2025-11-29 07:16:06.115835494 +0000 UTC m=+2517.160217575" lastFinishedPulling="2025-11-29 07:16:06.586905967 +0000 UTC m=+2517.631288058" observedRunningTime="2025-11-29 07:16:07.021419793 +0000 UTC m=+2518.065801884" watchObservedRunningTime="2025-11-29 07:16:07.023512406 +0000 UTC m=+2518.067894487" Nov 29 07:16:12 crc kubenswrapper[4947]: I1129 07:16:12.179827 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:16:12 crc kubenswrapper[4947]: E1129 07:16:12.181089 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:16:24 crc kubenswrapper[4947]: I1129 07:16:24.178964 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:16:24 crc kubenswrapper[4947]: E1129 07:16:24.180280 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:16:37 crc kubenswrapper[4947]: I1129 07:16:37.178866 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:16:37 crc kubenswrapper[4947]: E1129 07:16:37.180184 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:16:38 crc kubenswrapper[4947]: I1129 07:16:38.442535 4947 scope.go:117] "RemoveContainer" containerID="c70804da874a89e394a2eb609846fd6eb81bba660410545dfb851a76fb638483" Nov 29 07:16:38 crc kubenswrapper[4947]: I1129 07:16:38.538358 4947 scope.go:117] "RemoveContainer" containerID="0f851512bfc62d98d76dadb5271814ebb6ba8b67b31c069e285d0813b3ef3b0b" Nov 29 
07:16:38 crc kubenswrapper[4947]: I1129 07:16:38.579076 4947 scope.go:117] "RemoveContainer" containerID="d4b358deb8222da1b93452bb925bebcc37db6667de3a046485d8897bd3cd06f8" Nov 29 07:16:38 crc kubenswrapper[4947]: I1129 07:16:38.654119 4947 scope.go:117] "RemoveContainer" containerID="90d5dbdad52563af7f8908c5a87d7cdb8ec1d3f0d82d106f7b2587b60cd73ab2" Nov 29 07:16:38 crc kubenswrapper[4947]: I1129 07:16:38.739294 4947 scope.go:117] "RemoveContainer" containerID="0dce44565c5c2d6515956c2cbdf4578c47b76881db51440c12130fe66215b881" Nov 29 07:16:38 crc kubenswrapper[4947]: I1129 07:16:38.846949 4947 scope.go:117] "RemoveContainer" containerID="29411b86a185aa97f61a263fd791092a82016efe9d9faf00cde2b5639c3fdae4" Nov 29 07:16:38 crc kubenswrapper[4947]: I1129 07:16:38.920488 4947 scope.go:117] "RemoveContainer" containerID="99fd562d9b583388ae9893c0a79ea064ac319133912fd97c4a7a770a995db776" Nov 29 07:16:38 crc kubenswrapper[4947]: I1129 07:16:38.996169 4947 scope.go:117] "RemoveContainer" containerID="32fd31c4b506abe6f0a2a68b5a465bb2acf785e68de42627a5ecebb5253bd0b5" Nov 29 07:16:39 crc kubenswrapper[4947]: I1129 07:16:39.036434 4947 scope.go:117] "RemoveContainer" containerID="4a2032a3f792f0c10a8d656f365c1d2d245f994d531b9258e1940e19b43d3b05" Nov 29 07:16:39 crc kubenswrapper[4947]: I1129 07:16:39.084001 4947 scope.go:117] "RemoveContainer" containerID="db2b60b2e6cb04ddfb2155f615b46e8541ba7847b6edc8c55ddfdaa67d752d5e" Nov 29 07:16:51 crc kubenswrapper[4947]: I1129 07:16:51.180542 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:16:51 crc kubenswrapper[4947]: E1129 07:16:51.181857 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:17:04 crc kubenswrapper[4947]: I1129 07:17:04.178713 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:17:04 crc kubenswrapper[4947]: E1129 07:17:04.179515 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:17:19 crc kubenswrapper[4947]: I1129 07:17:19.186163 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:17:19 crc kubenswrapper[4947]: E1129 07:17:19.186837 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:17:30 crc kubenswrapper[4947]: I1129 07:17:30.179245 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:17:30 crc kubenswrapper[4947]: E1129 07:17:30.180357 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:17:32 crc kubenswrapper[4947]: I1129 07:17:32.794646 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sqd9h"] Nov 29 07:17:32 crc kubenswrapper[4947]: I1129 07:17:32.797318 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sqd9h" Nov 29 07:17:32 crc kubenswrapper[4947]: I1129 07:17:32.811041 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sqd9h"] Nov 29 07:17:32 crc kubenswrapper[4947]: I1129 07:17:32.844055 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9281330c-344f-40f6-9a4a-5fb352fa4f99-utilities\") pod \"certified-operators-sqd9h\" (UID: \"9281330c-344f-40f6-9a4a-5fb352fa4f99\") " pod="openshift-marketplace/certified-operators-sqd9h" Nov 29 07:17:32 crc kubenswrapper[4947]: I1129 07:17:32.844258 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9281330c-344f-40f6-9a4a-5fb352fa4f99-catalog-content\") pod \"certified-operators-sqd9h\" (UID: \"9281330c-344f-40f6-9a4a-5fb352fa4f99\") " pod="openshift-marketplace/certified-operators-sqd9h" Nov 29 07:17:32 crc kubenswrapper[4947]: I1129 07:17:32.844300 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6ztc\" (UniqueName: \"kubernetes.io/projected/9281330c-344f-40f6-9a4a-5fb352fa4f99-kube-api-access-m6ztc\") pod \"certified-operators-sqd9h\" (UID: \"9281330c-344f-40f6-9a4a-5fb352fa4f99\") " 
pod="openshift-marketplace/certified-operators-sqd9h" Nov 29 07:17:32 crc kubenswrapper[4947]: I1129 07:17:32.946302 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9281330c-344f-40f6-9a4a-5fb352fa4f99-catalog-content\") pod \"certified-operators-sqd9h\" (UID: \"9281330c-344f-40f6-9a4a-5fb352fa4f99\") " pod="openshift-marketplace/certified-operators-sqd9h" Nov 29 07:17:32 crc kubenswrapper[4947]: I1129 07:17:32.946367 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6ztc\" (UniqueName: \"kubernetes.io/projected/9281330c-344f-40f6-9a4a-5fb352fa4f99-kube-api-access-m6ztc\") pod \"certified-operators-sqd9h\" (UID: \"9281330c-344f-40f6-9a4a-5fb352fa4f99\") " pod="openshift-marketplace/certified-operators-sqd9h" Nov 29 07:17:32 crc kubenswrapper[4947]: I1129 07:17:32.946470 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9281330c-344f-40f6-9a4a-5fb352fa4f99-utilities\") pod \"certified-operators-sqd9h\" (UID: \"9281330c-344f-40f6-9a4a-5fb352fa4f99\") " pod="openshift-marketplace/certified-operators-sqd9h" Nov 29 07:17:32 crc kubenswrapper[4947]: I1129 07:17:32.947282 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9281330c-344f-40f6-9a4a-5fb352fa4f99-utilities\") pod \"certified-operators-sqd9h\" (UID: \"9281330c-344f-40f6-9a4a-5fb352fa4f99\") " pod="openshift-marketplace/certified-operators-sqd9h" Nov 29 07:17:32 crc kubenswrapper[4947]: I1129 07:17:32.947531 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9281330c-344f-40f6-9a4a-5fb352fa4f99-catalog-content\") pod \"certified-operators-sqd9h\" (UID: \"9281330c-344f-40f6-9a4a-5fb352fa4f99\") " 
pod="openshift-marketplace/certified-operators-sqd9h" Nov 29 07:17:32 crc kubenswrapper[4947]: I1129 07:17:32.983658 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6ztc\" (UniqueName: \"kubernetes.io/projected/9281330c-344f-40f6-9a4a-5fb352fa4f99-kube-api-access-m6ztc\") pod \"certified-operators-sqd9h\" (UID: \"9281330c-344f-40f6-9a4a-5fb352fa4f99\") " pod="openshift-marketplace/certified-operators-sqd9h" Nov 29 07:17:33 crc kubenswrapper[4947]: I1129 07:17:33.124906 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sqd9h" Nov 29 07:17:33 crc kubenswrapper[4947]: I1129 07:17:33.702415 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sqd9h"] Nov 29 07:17:33 crc kubenswrapper[4947]: I1129 07:17:33.950633 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqd9h" event={"ID":"9281330c-344f-40f6-9a4a-5fb352fa4f99","Type":"ContainerStarted","Data":"ada4ea719154703431a64bf211d79b6744cdd7557011e69a65c14f59366ef444"} Nov 29 07:17:34 crc kubenswrapper[4947]: I1129 07:17:34.970913 4947 generic.go:334] "Generic (PLEG): container finished" podID="9281330c-344f-40f6-9a4a-5fb352fa4f99" containerID="88922fc16f3238f767fe0324fcc404bc7eaffe57203ef18056856dabf0b008e0" exitCode=0 Nov 29 07:17:34 crc kubenswrapper[4947]: I1129 07:17:34.971438 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqd9h" event={"ID":"9281330c-344f-40f6-9a4a-5fb352fa4f99","Type":"ContainerDied","Data":"88922fc16f3238f767fe0324fcc404bc7eaffe57203ef18056856dabf0b008e0"} Nov 29 07:17:40 crc kubenswrapper[4947]: I1129 07:17:40.020401 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqd9h" 
event={"ID":"9281330c-344f-40f6-9a4a-5fb352fa4f99","Type":"ContainerStarted","Data":"b90399573512cb283d0aa1cd0861af48e88103cfcb6dc96f720076e718cd679f"} Nov 29 07:17:41 crc kubenswrapper[4947]: I1129 07:17:41.034100 4947 generic.go:334] "Generic (PLEG): container finished" podID="9281330c-344f-40f6-9a4a-5fb352fa4f99" containerID="b90399573512cb283d0aa1cd0861af48e88103cfcb6dc96f720076e718cd679f" exitCode=0 Nov 29 07:17:41 crc kubenswrapper[4947]: I1129 07:17:41.034161 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqd9h" event={"ID":"9281330c-344f-40f6-9a4a-5fb352fa4f99","Type":"ContainerDied","Data":"b90399573512cb283d0aa1cd0861af48e88103cfcb6dc96f720076e718cd679f"} Nov 29 07:17:42 crc kubenswrapper[4947]: I1129 07:17:42.050709 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqd9h" event={"ID":"9281330c-344f-40f6-9a4a-5fb352fa4f99","Type":"ContainerStarted","Data":"14d939606721cc1402313008022bda0062bb8680dafa84ae7d98b89f10b747ae"} Nov 29 07:17:42 crc kubenswrapper[4947]: I1129 07:17:42.092063 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sqd9h" podStartSLOduration=3.458642715 podStartE2EDuration="10.091993855s" podCreationTimestamp="2025-11-29 07:17:32 +0000 UTC" firstStartedPulling="2025-11-29 07:17:34.974145862 +0000 UTC m=+2606.018527943" lastFinishedPulling="2025-11-29 07:17:41.607497002 +0000 UTC m=+2612.651879083" observedRunningTime="2025-11-29 07:17:42.078570645 +0000 UTC m=+2613.122952716" watchObservedRunningTime="2025-11-29 07:17:42.091993855 +0000 UTC m=+2613.136375936" Nov 29 07:17:43 crc kubenswrapper[4947]: I1129 07:17:43.125663 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sqd9h" Nov 29 07:17:43 crc kubenswrapper[4947]: I1129 07:17:43.125749 4947 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-sqd9h" Nov 29 07:17:43 crc kubenswrapper[4947]: I1129 07:17:43.179929 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:17:43 crc kubenswrapper[4947]: E1129 07:17:43.180779 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:17:44 crc kubenswrapper[4947]: I1129 07:17:44.177179 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-sqd9h" podUID="9281330c-344f-40f6-9a4a-5fb352fa4f99" containerName="registry-server" probeResult="failure" output=< Nov 29 07:17:44 crc kubenswrapper[4947]: timeout: failed to connect service ":50051" within 1s Nov 29 07:17:44 crc kubenswrapper[4947]: > Nov 29 07:17:53 crc kubenswrapper[4947]: I1129 07:17:53.177493 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sqd9h" Nov 29 07:17:53 crc kubenswrapper[4947]: I1129 07:17:53.233353 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sqd9h" Nov 29 07:17:53 crc kubenswrapper[4947]: I1129 07:17:53.420977 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sqd9h"] Nov 29 07:17:54 crc kubenswrapper[4947]: I1129 07:17:54.180255 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:17:54 crc kubenswrapper[4947]: E1129 07:17:54.180608 4947 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:17:55 crc kubenswrapper[4947]: I1129 07:17:55.161407 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sqd9h" podUID="9281330c-344f-40f6-9a4a-5fb352fa4f99" containerName="registry-server" containerID="cri-o://14d939606721cc1402313008022bda0062bb8680dafa84ae7d98b89f10b747ae" gracePeriod=2 Nov 29 07:17:56 crc kubenswrapper[4947]: I1129 07:17:56.172494 4947 generic.go:334] "Generic (PLEG): container finished" podID="9281330c-344f-40f6-9a4a-5fb352fa4f99" containerID="14d939606721cc1402313008022bda0062bb8680dafa84ae7d98b89f10b747ae" exitCode=0 Nov 29 07:17:56 crc kubenswrapper[4947]: I1129 07:17:56.172562 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqd9h" event={"ID":"9281330c-344f-40f6-9a4a-5fb352fa4f99","Type":"ContainerDied","Data":"14d939606721cc1402313008022bda0062bb8680dafa84ae7d98b89f10b747ae"} Nov 29 07:17:56 crc kubenswrapper[4947]: I1129 07:17:56.313956 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sqd9h" Nov 29 07:17:56 crc kubenswrapper[4947]: I1129 07:17:56.393581 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9281330c-344f-40f6-9a4a-5fb352fa4f99-utilities\") pod \"9281330c-344f-40f6-9a4a-5fb352fa4f99\" (UID: \"9281330c-344f-40f6-9a4a-5fb352fa4f99\") " Nov 29 07:17:56 crc kubenswrapper[4947]: I1129 07:17:56.393689 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6ztc\" (UniqueName: \"kubernetes.io/projected/9281330c-344f-40f6-9a4a-5fb352fa4f99-kube-api-access-m6ztc\") pod \"9281330c-344f-40f6-9a4a-5fb352fa4f99\" (UID: \"9281330c-344f-40f6-9a4a-5fb352fa4f99\") " Nov 29 07:17:56 crc kubenswrapper[4947]: I1129 07:17:56.393748 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9281330c-344f-40f6-9a4a-5fb352fa4f99-catalog-content\") pod \"9281330c-344f-40f6-9a4a-5fb352fa4f99\" (UID: \"9281330c-344f-40f6-9a4a-5fb352fa4f99\") " Nov 29 07:17:56 crc kubenswrapper[4947]: I1129 07:17:56.396111 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9281330c-344f-40f6-9a4a-5fb352fa4f99-utilities" (OuterVolumeSpecName: "utilities") pod "9281330c-344f-40f6-9a4a-5fb352fa4f99" (UID: "9281330c-344f-40f6-9a4a-5fb352fa4f99"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:17:56 crc kubenswrapper[4947]: I1129 07:17:56.402558 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9281330c-344f-40f6-9a4a-5fb352fa4f99-kube-api-access-m6ztc" (OuterVolumeSpecName: "kube-api-access-m6ztc") pod "9281330c-344f-40f6-9a4a-5fb352fa4f99" (UID: "9281330c-344f-40f6-9a4a-5fb352fa4f99"). InnerVolumeSpecName "kube-api-access-m6ztc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:17:56 crc kubenswrapper[4947]: I1129 07:17:56.449716 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9281330c-344f-40f6-9a4a-5fb352fa4f99-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9281330c-344f-40f6-9a4a-5fb352fa4f99" (UID: "9281330c-344f-40f6-9a4a-5fb352fa4f99"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:17:56 crc kubenswrapper[4947]: I1129 07:17:56.497352 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9281330c-344f-40f6-9a4a-5fb352fa4f99-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:17:56 crc kubenswrapper[4947]: I1129 07:17:56.497847 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6ztc\" (UniqueName: \"kubernetes.io/projected/9281330c-344f-40f6-9a4a-5fb352fa4f99-kube-api-access-m6ztc\") on node \"crc\" DevicePath \"\"" Nov 29 07:17:56 crc kubenswrapper[4947]: I1129 07:17:56.497952 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9281330c-344f-40f6-9a4a-5fb352fa4f99-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:17:57 crc kubenswrapper[4947]: I1129 07:17:57.218744 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqd9h" event={"ID":"9281330c-344f-40f6-9a4a-5fb352fa4f99","Type":"ContainerDied","Data":"ada4ea719154703431a64bf211d79b6744cdd7557011e69a65c14f59366ef444"} Nov 29 07:17:57 crc kubenswrapper[4947]: I1129 07:17:57.218865 4947 scope.go:117] "RemoveContainer" containerID="14d939606721cc1402313008022bda0062bb8680dafa84ae7d98b89f10b747ae" Nov 29 07:17:57 crc kubenswrapper[4947]: I1129 07:17:57.219351 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sqd9h" Nov 29 07:17:57 crc kubenswrapper[4947]: I1129 07:17:57.284706 4947 scope.go:117] "RemoveContainer" containerID="b90399573512cb283d0aa1cd0861af48e88103cfcb6dc96f720076e718cd679f" Nov 29 07:17:57 crc kubenswrapper[4947]: I1129 07:17:57.355723 4947 scope.go:117] "RemoveContainer" containerID="88922fc16f3238f767fe0324fcc404bc7eaffe57203ef18056856dabf0b008e0" Nov 29 07:17:57 crc kubenswrapper[4947]: I1129 07:17:57.368454 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sqd9h"] Nov 29 07:17:57 crc kubenswrapper[4947]: I1129 07:17:57.379281 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sqd9h"] Nov 29 07:17:59 crc kubenswrapper[4947]: I1129 07:17:59.192489 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9281330c-344f-40f6-9a4a-5fb352fa4f99" path="/var/lib/kubelet/pods/9281330c-344f-40f6-9a4a-5fb352fa4f99/volumes" Nov 29 07:18:07 crc kubenswrapper[4947]: I1129 07:18:07.179897 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:18:07 crc kubenswrapper[4947]: E1129 07:18:07.181305 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:18:18 crc kubenswrapper[4947]: I1129 07:18:18.179164 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:18:18 crc kubenswrapper[4947]: E1129 07:18:18.180126 4947 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:18:33 crc kubenswrapper[4947]: I1129 07:18:33.180052 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:18:33 crc kubenswrapper[4947]: E1129 07:18:33.181611 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:18:39 crc kubenswrapper[4947]: I1129 07:18:39.636941 4947 generic.go:334] "Generic (PLEG): container finished" podID="a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039" containerID="1554daa32e78887c286784849835f44b00c9f55b89c738c059494dab41e397d6" exitCode=0 Nov 29 07:18:39 crc kubenswrapper[4947]: I1129 07:18:39.636982 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" event={"ID":"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039","Type":"ContainerDied","Data":"1554daa32e78887c286784849835f44b00c9f55b89c738c059494dab41e397d6"} Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.082823 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.161121 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dghsc\" (UniqueName: \"kubernetes.io/projected/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-kube-api-access-dghsc\") pod \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\" (UID: \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\") " Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.161259 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-ceph\") pod \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\" (UID: \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\") " Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.161353 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-inventory\") pod \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\" (UID: \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\") " Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.161394 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-bootstrap-combined-ca-bundle\") pod \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\" (UID: \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\") " Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.161581 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-ssh-key\") pod \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\" (UID: \"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039\") " Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.173637 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039" (UID: "a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.173713 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-ceph" (OuterVolumeSpecName: "ceph") pod "a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039" (UID: "a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.173741 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-kube-api-access-dghsc" (OuterVolumeSpecName: "kube-api-access-dghsc") pod "a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039" (UID: "a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039"). InnerVolumeSpecName "kube-api-access-dghsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.196619 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-inventory" (OuterVolumeSpecName: "inventory") pod "a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039" (UID: "a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.197390 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039" (UID: "a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.264873 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dghsc\" (UniqueName: \"kubernetes.io/projected/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-kube-api-access-dghsc\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.264923 4947 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-ceph\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.264936 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.264951 4947 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.264963 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.658658 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" event={"ID":"a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039","Type":"ContainerDied","Data":"a418b6f6254e76bed9f70c2e85fa10f7a7d2cf9dd492e62c80ebd84d81809cf9"} Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.658718 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a418b6f6254e76bed9f70c2e85fa10f7a7d2cf9dd492e62c80ebd84d81809cf9" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.658792 4947 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.760060 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz"] Nov 29 07:18:41 crc kubenswrapper[4947]: E1129 07:18:41.760638 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.760670 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 29 07:18:41 crc kubenswrapper[4947]: E1129 07:18:41.760702 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9281330c-344f-40f6-9a4a-5fb352fa4f99" containerName="registry-server" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.760712 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="9281330c-344f-40f6-9a4a-5fb352fa4f99" containerName="registry-server" Nov 29 07:18:41 crc kubenswrapper[4947]: E1129 07:18:41.760727 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9281330c-344f-40f6-9a4a-5fb352fa4f99" containerName="extract-utilities" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.760733 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="9281330c-344f-40f6-9a4a-5fb352fa4f99" containerName="extract-utilities" Nov 29 07:18:41 crc kubenswrapper[4947]: E1129 07:18:41.760755 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9281330c-344f-40f6-9a4a-5fb352fa4f99" containerName="extract-content" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.760761 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="9281330c-344f-40f6-9a4a-5fb352fa4f99" containerName="extract-content" Nov 29 
07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.760985 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="9281330c-344f-40f6-9a4a-5fb352fa4f99" containerName="registry-server" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.761025 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.762097 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.772894 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.773349 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.773511 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.773692 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.773831 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.776423 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz"] Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.881297 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46ef6c65-ceb7-4787-95bc-783fc372fdf7-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-n25vz\" (UID: \"46ef6c65-ceb7-4787-95bc-783fc372fdf7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.881527 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46ef6c65-ceb7-4787-95bc-783fc372fdf7-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n25vz\" (UID: \"46ef6c65-ceb7-4787-95bc-783fc372fdf7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.882166 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/46ef6c65-ceb7-4787-95bc-783fc372fdf7-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n25vz\" (UID: \"46ef6c65-ceb7-4787-95bc-783fc372fdf7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.882413 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp4jt\" (UniqueName: \"kubernetes.io/projected/46ef6c65-ceb7-4787-95bc-783fc372fdf7-kube-api-access-lp4jt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n25vz\" (UID: \"46ef6c65-ceb7-4787-95bc-783fc372fdf7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.984346 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46ef6c65-ceb7-4787-95bc-783fc372fdf7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n25vz\" (UID: \"46ef6c65-ceb7-4787-95bc-783fc372fdf7\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.984448 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46ef6c65-ceb7-4787-95bc-783fc372fdf7-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n25vz\" (UID: \"46ef6c65-ceb7-4787-95bc-783fc372fdf7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.984574 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/46ef6c65-ceb7-4787-95bc-783fc372fdf7-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n25vz\" (UID: \"46ef6c65-ceb7-4787-95bc-783fc372fdf7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.984603 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp4jt\" (UniqueName: \"kubernetes.io/projected/46ef6c65-ceb7-4787-95bc-783fc372fdf7-kube-api-access-lp4jt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n25vz\" (UID: \"46ef6c65-ceb7-4787-95bc-783fc372fdf7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.990334 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/46ef6c65-ceb7-4787-95bc-783fc372fdf7-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n25vz\" (UID: \"46ef6c65-ceb7-4787-95bc-783fc372fdf7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz" Nov 29 07:18:41 crc kubenswrapper[4947]: I1129 07:18:41.990385 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/46ef6c65-ceb7-4787-95bc-783fc372fdf7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n25vz\" (UID: \"46ef6c65-ceb7-4787-95bc-783fc372fdf7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz" Nov 29 07:18:42 crc kubenswrapper[4947]: I1129 07:18:42.002979 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46ef6c65-ceb7-4787-95bc-783fc372fdf7-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n25vz\" (UID: \"46ef6c65-ceb7-4787-95bc-783fc372fdf7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz" Nov 29 07:18:42 crc kubenswrapper[4947]: I1129 07:18:42.004661 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp4jt\" (UniqueName: \"kubernetes.io/projected/46ef6c65-ceb7-4787-95bc-783fc372fdf7-kube-api-access-lp4jt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n25vz\" (UID: \"46ef6c65-ceb7-4787-95bc-783fc372fdf7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz" Nov 29 07:18:42 crc kubenswrapper[4947]: I1129 07:18:42.097555 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz" Nov 29 07:18:42 crc kubenswrapper[4947]: I1129 07:18:42.709348 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz"] Nov 29 07:18:43 crc kubenswrapper[4947]: I1129 07:18:43.682556 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz" event={"ID":"46ef6c65-ceb7-4787-95bc-783fc372fdf7","Type":"ContainerStarted","Data":"a5a28e09f2998402a74f078297fd8a84d95f7f37c4507e446760b1e0fe722c89"} Nov 29 07:18:43 crc kubenswrapper[4947]: I1129 07:18:43.683123 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz" event={"ID":"46ef6c65-ceb7-4787-95bc-783fc372fdf7","Type":"ContainerStarted","Data":"caaab3ef9d7a2faabab41283b58fb4cee64108aa812e0f6c4310f0ad1d14b060"} Nov 29 07:18:43 crc kubenswrapper[4947]: I1129 07:18:43.711454 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz" podStartSLOduration=2.059737618 podStartE2EDuration="2.71142526s" podCreationTimestamp="2025-11-29 07:18:41 +0000 UTC" firstStartedPulling="2025-11-29 07:18:42.711338135 +0000 UTC m=+2673.755720216" lastFinishedPulling="2025-11-29 07:18:43.363025777 +0000 UTC m=+2674.407407858" observedRunningTime="2025-11-29 07:18:43.702939725 +0000 UTC m=+2674.747321796" watchObservedRunningTime="2025-11-29 07:18:43.71142526 +0000 UTC m=+2674.755807341" Nov 29 07:18:44 crc kubenswrapper[4947]: I1129 07:18:44.179684 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:18:44 crc kubenswrapper[4947]: E1129 07:18:44.180095 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:18:57 crc kubenswrapper[4947]: I1129 07:18:57.179719 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:18:57 crc kubenswrapper[4947]: E1129 07:18:57.181592 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:19:08 crc kubenswrapper[4947]: I1129 07:19:08.179296 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:19:08 crc kubenswrapper[4947]: E1129 07:19:08.180532 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:19:19 crc kubenswrapper[4947]: I1129 07:19:19.010692 4947 generic.go:334] "Generic (PLEG): container finished" podID="46ef6c65-ceb7-4787-95bc-783fc372fdf7" containerID="a5a28e09f2998402a74f078297fd8a84d95f7f37c4507e446760b1e0fe722c89" exitCode=0 Nov 29 07:19:19 crc kubenswrapper[4947]: I1129 07:19:19.010786 
4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz" event={"ID":"46ef6c65-ceb7-4787-95bc-783fc372fdf7","Type":"ContainerDied","Data":"a5a28e09f2998402a74f078297fd8a84d95f7f37c4507e446760b1e0fe722c89"} Nov 29 07:19:20 crc kubenswrapper[4947]: I1129 07:19:20.178826 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:19:20 crc kubenswrapper[4947]: E1129 07:19:20.179398 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:19:20 crc kubenswrapper[4947]: I1129 07:19:20.523369 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz" Nov 29 07:19:20 crc kubenswrapper[4947]: I1129 07:19:20.671576 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp4jt\" (UniqueName: \"kubernetes.io/projected/46ef6c65-ceb7-4787-95bc-783fc372fdf7-kube-api-access-lp4jt\") pod \"46ef6c65-ceb7-4787-95bc-783fc372fdf7\" (UID: \"46ef6c65-ceb7-4787-95bc-783fc372fdf7\") " Nov 29 07:19:20 crc kubenswrapper[4947]: I1129 07:19:20.672254 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/46ef6c65-ceb7-4787-95bc-783fc372fdf7-ssh-key\") pod \"46ef6c65-ceb7-4787-95bc-783fc372fdf7\" (UID: \"46ef6c65-ceb7-4787-95bc-783fc372fdf7\") " Nov 29 07:19:20 crc kubenswrapper[4947]: I1129 07:19:20.672400 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46ef6c65-ceb7-4787-95bc-783fc372fdf7-ceph\") pod \"46ef6c65-ceb7-4787-95bc-783fc372fdf7\" (UID: \"46ef6c65-ceb7-4787-95bc-783fc372fdf7\") " Nov 29 07:19:20 crc kubenswrapper[4947]: I1129 07:19:20.672452 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46ef6c65-ceb7-4787-95bc-783fc372fdf7-inventory\") pod \"46ef6c65-ceb7-4787-95bc-783fc372fdf7\" (UID: \"46ef6c65-ceb7-4787-95bc-783fc372fdf7\") " Nov 29 07:19:20 crc kubenswrapper[4947]: I1129 07:19:20.682688 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ef6c65-ceb7-4787-95bc-783fc372fdf7-ceph" (OuterVolumeSpecName: "ceph") pod "46ef6c65-ceb7-4787-95bc-783fc372fdf7" (UID: "46ef6c65-ceb7-4787-95bc-783fc372fdf7"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:19:20 crc kubenswrapper[4947]: I1129 07:19:20.698675 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46ef6c65-ceb7-4787-95bc-783fc372fdf7-kube-api-access-lp4jt" (OuterVolumeSpecName: "kube-api-access-lp4jt") pod "46ef6c65-ceb7-4787-95bc-783fc372fdf7" (UID: "46ef6c65-ceb7-4787-95bc-783fc372fdf7"). InnerVolumeSpecName "kube-api-access-lp4jt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:19:20 crc kubenswrapper[4947]: I1129 07:19:20.713206 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ef6c65-ceb7-4787-95bc-783fc372fdf7-inventory" (OuterVolumeSpecName: "inventory") pod "46ef6c65-ceb7-4787-95bc-783fc372fdf7" (UID: "46ef6c65-ceb7-4787-95bc-783fc372fdf7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:19:20 crc kubenswrapper[4947]: I1129 07:19:20.714327 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ef6c65-ceb7-4787-95bc-783fc372fdf7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "46ef6c65-ceb7-4787-95bc-783fc372fdf7" (UID: "46ef6c65-ceb7-4787-95bc-783fc372fdf7"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:19:20 crc kubenswrapper[4947]: I1129 07:19:20.775700 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46ef6c65-ceb7-4787-95bc-783fc372fdf7-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:19:20 crc kubenswrapper[4947]: I1129 07:19:20.775777 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp4jt\" (UniqueName: \"kubernetes.io/projected/46ef6c65-ceb7-4787-95bc-783fc372fdf7-kube-api-access-lp4jt\") on node \"crc\" DevicePath \"\"" Nov 29 07:19:20 crc kubenswrapper[4947]: I1129 07:19:20.775796 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/46ef6c65-ceb7-4787-95bc-783fc372fdf7-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:19:20 crc kubenswrapper[4947]: I1129 07:19:20.775808 4947 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46ef6c65-ceb7-4787-95bc-783fc372fdf7-ceph\") on node \"crc\" DevicePath \"\"" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.033767 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz" event={"ID":"46ef6c65-ceb7-4787-95bc-783fc372fdf7","Type":"ContainerDied","Data":"caaab3ef9d7a2faabab41283b58fb4cee64108aa812e0f6c4310f0ad1d14b060"} Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.033854 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caaab3ef9d7a2faabab41283b58fb4cee64108aa812e0f6c4310f0ad1d14b060" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.034386 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n25vz" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.155982 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm"] Nov 29 07:19:21 crc kubenswrapper[4947]: E1129 07:19:21.156626 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ef6c65-ceb7-4787-95bc-783fc372fdf7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.156652 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ef6c65-ceb7-4787-95bc-783fc372fdf7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.156936 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="46ef6c65-ceb7-4787-95bc-783fc372fdf7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.157877 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.162277 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.162706 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.162931 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.163397 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.165896 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.167439 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm"] Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.285066 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b86cae48-3c9d-4647-96ac-bd6ac89ce895-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm\" (UID: \"b86cae48-3c9d-4647-96ac-bd6ac89ce895\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.285150 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b86cae48-3c9d-4647-96ac-bd6ac89ce895-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm\" (UID: \"b86cae48-3c9d-4647-96ac-bd6ac89ce895\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.285236 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b86cae48-3c9d-4647-96ac-bd6ac89ce895-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm\" (UID: \"b86cae48-3c9d-4647-96ac-bd6ac89ce895\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.285795 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtkvx\" (UniqueName: \"kubernetes.io/projected/b86cae48-3c9d-4647-96ac-bd6ac89ce895-kube-api-access-mtkvx\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm\" (UID: \"b86cae48-3c9d-4647-96ac-bd6ac89ce895\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.387958 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtkvx\" (UniqueName: \"kubernetes.io/projected/b86cae48-3c9d-4647-96ac-bd6ac89ce895-kube-api-access-mtkvx\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm\" (UID: \"b86cae48-3c9d-4647-96ac-bd6ac89ce895\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.388022 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b86cae48-3c9d-4647-96ac-bd6ac89ce895-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm\" (UID: \"b86cae48-3c9d-4647-96ac-bd6ac89ce895\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.388051 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b86cae48-3c9d-4647-96ac-bd6ac89ce895-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm\" (UID: \"b86cae48-3c9d-4647-96ac-bd6ac89ce895\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.388088 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b86cae48-3c9d-4647-96ac-bd6ac89ce895-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm\" (UID: \"b86cae48-3c9d-4647-96ac-bd6ac89ce895\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.395648 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b86cae48-3c9d-4647-96ac-bd6ac89ce895-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm\" (UID: \"b86cae48-3c9d-4647-96ac-bd6ac89ce895\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.395971 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b86cae48-3c9d-4647-96ac-bd6ac89ce895-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm\" (UID: \"b86cae48-3c9d-4647-96ac-bd6ac89ce895\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.407453 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b86cae48-3c9d-4647-96ac-bd6ac89ce895-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm\" (UID: \"b86cae48-3c9d-4647-96ac-bd6ac89ce895\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.408058 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtkvx\" (UniqueName: \"kubernetes.io/projected/b86cae48-3c9d-4647-96ac-bd6ac89ce895-kube-api-access-mtkvx\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm\" (UID: \"b86cae48-3c9d-4647-96ac-bd6ac89ce895\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm" Nov 29 07:19:21 crc kubenswrapper[4947]: I1129 07:19:21.479322 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm" Nov 29 07:19:22 crc kubenswrapper[4947]: I1129 07:19:22.046728 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm"] Nov 29 07:19:23 crc kubenswrapper[4947]: I1129 07:19:23.055990 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm" event={"ID":"b86cae48-3c9d-4647-96ac-bd6ac89ce895","Type":"ContainerStarted","Data":"1d0f55b5526b7481e4ab5b747ebb30a8819b8207b1601eccf01cc6b35440beb9"} Nov 29 07:19:23 crc kubenswrapper[4947]: I1129 07:19:23.056604 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm" event={"ID":"b86cae48-3c9d-4647-96ac-bd6ac89ce895","Type":"ContainerStarted","Data":"748e74bcb3af1d8e2757dc943c33ebb666af99209f1018d3f070baf13f1824a4"} Nov 29 07:19:23 crc kubenswrapper[4947]: I1129 07:19:23.085907 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm" podStartSLOduration=1.5612960660000001 podStartE2EDuration="2.085886386s" podCreationTimestamp="2025-11-29 07:19:21 +0000 UTC" firstStartedPulling="2025-11-29 
07:19:22.063414194 +0000 UTC m=+2713.107796265" lastFinishedPulling="2025-11-29 07:19:22.588004514 +0000 UTC m=+2713.632386585" observedRunningTime="2025-11-29 07:19:23.07420259 +0000 UTC m=+2714.118584681" watchObservedRunningTime="2025-11-29 07:19:23.085886386 +0000 UTC m=+2714.130268467" Nov 29 07:19:28 crc kubenswrapper[4947]: I1129 07:19:28.106925 4947 generic.go:334] "Generic (PLEG): container finished" podID="b86cae48-3c9d-4647-96ac-bd6ac89ce895" containerID="1d0f55b5526b7481e4ab5b747ebb30a8819b8207b1601eccf01cc6b35440beb9" exitCode=0 Nov 29 07:19:28 crc kubenswrapper[4947]: I1129 07:19:28.107062 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm" event={"ID":"b86cae48-3c9d-4647-96ac-bd6ac89ce895","Type":"ContainerDied","Data":"1d0f55b5526b7481e4ab5b747ebb30a8819b8207b1601eccf01cc6b35440beb9"} Nov 29 07:19:29 crc kubenswrapper[4947]: I1129 07:19:29.574687 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm" Nov 29 07:19:29 crc kubenswrapper[4947]: I1129 07:19:29.683014 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtkvx\" (UniqueName: \"kubernetes.io/projected/b86cae48-3c9d-4647-96ac-bd6ac89ce895-kube-api-access-mtkvx\") pod \"b86cae48-3c9d-4647-96ac-bd6ac89ce895\" (UID: \"b86cae48-3c9d-4647-96ac-bd6ac89ce895\") " Nov 29 07:19:29 crc kubenswrapper[4947]: I1129 07:19:29.683083 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b86cae48-3c9d-4647-96ac-bd6ac89ce895-ssh-key\") pod \"b86cae48-3c9d-4647-96ac-bd6ac89ce895\" (UID: \"b86cae48-3c9d-4647-96ac-bd6ac89ce895\") " Nov 29 07:19:29 crc kubenswrapper[4947]: I1129 07:19:29.683928 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b86cae48-3c9d-4647-96ac-bd6ac89ce895-inventory\") pod \"b86cae48-3c9d-4647-96ac-bd6ac89ce895\" (UID: \"b86cae48-3c9d-4647-96ac-bd6ac89ce895\") " Nov 29 07:19:29 crc kubenswrapper[4947]: I1129 07:19:29.684312 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b86cae48-3c9d-4647-96ac-bd6ac89ce895-ceph\") pod \"b86cae48-3c9d-4647-96ac-bd6ac89ce895\" (UID: \"b86cae48-3c9d-4647-96ac-bd6ac89ce895\") " Nov 29 07:19:29 crc kubenswrapper[4947]: I1129 07:19:29.689663 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b86cae48-3c9d-4647-96ac-bd6ac89ce895-kube-api-access-mtkvx" (OuterVolumeSpecName: "kube-api-access-mtkvx") pod "b86cae48-3c9d-4647-96ac-bd6ac89ce895" (UID: "b86cae48-3c9d-4647-96ac-bd6ac89ce895"). InnerVolumeSpecName "kube-api-access-mtkvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:19:29 crc kubenswrapper[4947]: I1129 07:19:29.690030 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b86cae48-3c9d-4647-96ac-bd6ac89ce895-ceph" (OuterVolumeSpecName: "ceph") pod "b86cae48-3c9d-4647-96ac-bd6ac89ce895" (UID: "b86cae48-3c9d-4647-96ac-bd6ac89ce895"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:19:29 crc kubenswrapper[4947]: I1129 07:19:29.715923 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b86cae48-3c9d-4647-96ac-bd6ac89ce895-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b86cae48-3c9d-4647-96ac-bd6ac89ce895" (UID: "b86cae48-3c9d-4647-96ac-bd6ac89ce895"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:19:29 crc kubenswrapper[4947]: I1129 07:19:29.716478 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b86cae48-3c9d-4647-96ac-bd6ac89ce895-inventory" (OuterVolumeSpecName: "inventory") pod "b86cae48-3c9d-4647-96ac-bd6ac89ce895" (UID: "b86cae48-3c9d-4647-96ac-bd6ac89ce895"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:19:29 crc kubenswrapper[4947]: I1129 07:19:29.787412 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtkvx\" (UniqueName: \"kubernetes.io/projected/b86cae48-3c9d-4647-96ac-bd6ac89ce895-kube-api-access-mtkvx\") on node \"crc\" DevicePath \"\"" Nov 29 07:19:29 crc kubenswrapper[4947]: I1129 07:19:29.787473 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b86cae48-3c9d-4647-96ac-bd6ac89ce895-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:19:29 crc kubenswrapper[4947]: I1129 07:19:29.787488 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b86cae48-3c9d-4647-96ac-bd6ac89ce895-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:19:29 crc kubenswrapper[4947]: I1129 07:19:29.787500 4947 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b86cae48-3c9d-4647-96ac-bd6ac89ce895-ceph\") on node \"crc\" DevicePath \"\"" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.126990 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm" event={"ID":"b86cae48-3c9d-4647-96ac-bd6ac89ce895","Type":"ContainerDied","Data":"748e74bcb3af1d8e2757dc943c33ebb666af99209f1018d3f070baf13f1824a4"} Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.127041 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="748e74bcb3af1d8e2757dc943c33ebb666af99209f1018d3f070baf13f1824a4" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.127128 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.260705 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9"] Nov 29 07:19:30 crc kubenswrapper[4947]: E1129 07:19:30.261156 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b86cae48-3c9d-4647-96ac-bd6ac89ce895" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.261178 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b86cae48-3c9d-4647-96ac-bd6ac89ce895" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.261395 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="b86cae48-3c9d-4647-96ac-bd6ac89ce895" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.262200 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.267808 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.268199 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.268369 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.268470 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.268556 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.293625 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9"] Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.297387 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4bb992c-9305-4cf1-aa77-789ee88999fd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nkrl9\" (UID: \"b4bb992c-9305-4cf1-aa77-789ee88999fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.297590 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjct8\" (UniqueName: \"kubernetes.io/projected/b4bb992c-9305-4cf1-aa77-789ee88999fd-kube-api-access-fjct8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nkrl9\" (UID: 
\"b4bb992c-9305-4cf1-aa77-789ee88999fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.297691 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b4bb992c-9305-4cf1-aa77-789ee88999fd-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nkrl9\" (UID: \"b4bb992c-9305-4cf1-aa77-789ee88999fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.297741 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4bb992c-9305-4cf1-aa77-789ee88999fd-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nkrl9\" (UID: \"b4bb992c-9305-4cf1-aa77-789ee88999fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.400338 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjct8\" (UniqueName: \"kubernetes.io/projected/b4bb992c-9305-4cf1-aa77-789ee88999fd-kube-api-access-fjct8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nkrl9\" (UID: \"b4bb992c-9305-4cf1-aa77-789ee88999fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.400544 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b4bb992c-9305-4cf1-aa77-789ee88999fd-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nkrl9\" (UID: \"b4bb992c-9305-4cf1-aa77-789ee88999fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.400611 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4bb992c-9305-4cf1-aa77-789ee88999fd-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nkrl9\" (UID: \"b4bb992c-9305-4cf1-aa77-789ee88999fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.400670 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4bb992c-9305-4cf1-aa77-789ee88999fd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nkrl9\" (UID: \"b4bb992c-9305-4cf1-aa77-789ee88999fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.417038 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4bb992c-9305-4cf1-aa77-789ee88999fd-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nkrl9\" (UID: \"b4bb992c-9305-4cf1-aa77-789ee88999fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.420146 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b4bb992c-9305-4cf1-aa77-789ee88999fd-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nkrl9\" (UID: \"b4bb992c-9305-4cf1-aa77-789ee88999fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.425493 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4bb992c-9305-4cf1-aa77-789ee88999fd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nkrl9\" (UID: \"b4bb992c-9305-4cf1-aa77-789ee88999fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.453616 
4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjct8\" (UniqueName: \"kubernetes.io/projected/b4bb992c-9305-4cf1-aa77-789ee88999fd-kube-api-access-fjct8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nkrl9\" (UID: \"b4bb992c-9305-4cf1-aa77-789ee88999fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9" Nov 29 07:19:30 crc kubenswrapper[4947]: I1129 07:19:30.591439 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9" Nov 29 07:19:31 crc kubenswrapper[4947]: I1129 07:19:31.177177 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9"] Nov 29 07:19:32 crc kubenswrapper[4947]: I1129 07:19:32.156209 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9" event={"ID":"b4bb992c-9305-4cf1-aa77-789ee88999fd","Type":"ContainerStarted","Data":"e0a80f9a18a247c9294bf77fb05567c0ebbb33e82fbfa2c6931ab1097eee2a49"} Nov 29 07:19:33 crc kubenswrapper[4947]: I1129 07:19:33.167044 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9" event={"ID":"b4bb992c-9305-4cf1-aa77-789ee88999fd","Type":"ContainerStarted","Data":"3940c009bbd24b777a1bae56b170c8b40f0925b7c8b8e4cbe3261f054247ee18"} Nov 29 07:19:33 crc kubenswrapper[4947]: I1129 07:19:33.195564 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9" podStartSLOduration=1.5536863539999999 podStartE2EDuration="3.195536018s" podCreationTimestamp="2025-11-29 07:19:30 +0000 UTC" firstStartedPulling="2025-11-29 07:19:31.18552556 +0000 UTC m=+2722.229907641" lastFinishedPulling="2025-11-29 07:19:32.827375224 +0000 UTC m=+2723.871757305" observedRunningTime="2025-11-29 07:19:33.188778387 
+0000 UTC m=+2724.233160468" watchObservedRunningTime="2025-11-29 07:19:33.195536018 +0000 UTC m=+2724.239918099" Nov 29 07:19:35 crc kubenswrapper[4947]: I1129 07:19:35.180036 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:19:36 crc kubenswrapper[4947]: I1129 07:19:36.199910 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"c579e62fbc7ec20ab5411cd9ba7d8f85ddfcbbe286f5b6a301b4c37126dd7a87"} Nov 29 07:20:21 crc kubenswrapper[4947]: I1129 07:20:21.645300 4947 generic.go:334] "Generic (PLEG): container finished" podID="b4bb992c-9305-4cf1-aa77-789ee88999fd" containerID="3940c009bbd24b777a1bae56b170c8b40f0925b7c8b8e4cbe3261f054247ee18" exitCode=0 Nov 29 07:20:21 crc kubenswrapper[4947]: I1129 07:20:21.645395 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9" event={"ID":"b4bb992c-9305-4cf1-aa77-789ee88999fd","Type":"ContainerDied","Data":"3940c009bbd24b777a1bae56b170c8b40f0925b7c8b8e4cbe3261f054247ee18"} Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.156294 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.293904 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4bb992c-9305-4cf1-aa77-789ee88999fd-inventory\") pod \"b4bb992c-9305-4cf1-aa77-789ee88999fd\" (UID: \"b4bb992c-9305-4cf1-aa77-789ee88999fd\") " Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.294206 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b4bb992c-9305-4cf1-aa77-789ee88999fd-ceph\") pod \"b4bb992c-9305-4cf1-aa77-789ee88999fd\" (UID: \"b4bb992c-9305-4cf1-aa77-789ee88999fd\") " Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.294320 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjct8\" (UniqueName: \"kubernetes.io/projected/b4bb992c-9305-4cf1-aa77-789ee88999fd-kube-api-access-fjct8\") pod \"b4bb992c-9305-4cf1-aa77-789ee88999fd\" (UID: \"b4bb992c-9305-4cf1-aa77-789ee88999fd\") " Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.294368 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4bb992c-9305-4cf1-aa77-789ee88999fd-ssh-key\") pod \"b4bb992c-9305-4cf1-aa77-789ee88999fd\" (UID: \"b4bb992c-9305-4cf1-aa77-789ee88999fd\") " Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.302657 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4bb992c-9305-4cf1-aa77-789ee88999fd-ceph" (OuterVolumeSpecName: "ceph") pod "b4bb992c-9305-4cf1-aa77-789ee88999fd" (UID: "b4bb992c-9305-4cf1-aa77-789ee88999fd"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.302685 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4bb992c-9305-4cf1-aa77-789ee88999fd-kube-api-access-fjct8" (OuterVolumeSpecName: "kube-api-access-fjct8") pod "b4bb992c-9305-4cf1-aa77-789ee88999fd" (UID: "b4bb992c-9305-4cf1-aa77-789ee88999fd"). InnerVolumeSpecName "kube-api-access-fjct8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.327668 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4bb992c-9305-4cf1-aa77-789ee88999fd-inventory" (OuterVolumeSpecName: "inventory") pod "b4bb992c-9305-4cf1-aa77-789ee88999fd" (UID: "b4bb992c-9305-4cf1-aa77-789ee88999fd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.329378 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4bb992c-9305-4cf1-aa77-789ee88999fd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b4bb992c-9305-4cf1-aa77-789ee88999fd" (UID: "b4bb992c-9305-4cf1-aa77-789ee88999fd"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.396963 4947 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b4bb992c-9305-4cf1-aa77-789ee88999fd-ceph\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.396999 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjct8\" (UniqueName: \"kubernetes.io/projected/b4bb992c-9305-4cf1-aa77-789ee88999fd-kube-api-access-fjct8\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.397010 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4bb992c-9305-4cf1-aa77-789ee88999fd-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.397040 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4bb992c-9305-4cf1-aa77-789ee88999fd-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.665342 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9" event={"ID":"b4bb992c-9305-4cf1-aa77-789ee88999fd","Type":"ContainerDied","Data":"e0a80f9a18a247c9294bf77fb05567c0ebbb33e82fbfa2c6931ab1097eee2a49"} Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.665674 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0a80f9a18a247c9294bf77fb05567c0ebbb33e82fbfa2c6931ab1097eee2a49" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.665419 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nkrl9" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.774507 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc"] Nov 29 07:20:23 crc kubenswrapper[4947]: E1129 07:20:23.775204 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4bb992c-9305-4cf1-aa77-789ee88999fd" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.775247 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4bb992c-9305-4cf1-aa77-789ee88999fd" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.775455 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4bb992c-9305-4cf1-aa77-789ee88999fd" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.776202 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.781960 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.783004 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.783192 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.783592 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.787964 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.790470 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc"] Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.910252 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f41a587-e5e5-4f2a-becb-4870793e41c9-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc\" (UID: \"5f41a587-e5e5-4f2a-becb-4870793e41c9\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.910392 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f41a587-e5e5-4f2a-becb-4870793e41c9-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc\" (UID: \"5f41a587-e5e5-4f2a-becb-4870793e41c9\") " 
pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.910462 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5f41a587-e5e5-4f2a-becb-4870793e41c9-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc\" (UID: \"5f41a587-e5e5-4f2a-becb-4870793e41c9\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc" Nov 29 07:20:23 crc kubenswrapper[4947]: I1129 07:20:23.910497 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87lv8\" (UniqueName: \"kubernetes.io/projected/5f41a587-e5e5-4f2a-becb-4870793e41c9-kube-api-access-87lv8\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc\" (UID: \"5f41a587-e5e5-4f2a-becb-4870793e41c9\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc" Nov 29 07:20:24 crc kubenswrapper[4947]: I1129 07:20:24.012290 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5f41a587-e5e5-4f2a-becb-4870793e41c9-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc\" (UID: \"5f41a587-e5e5-4f2a-becb-4870793e41c9\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc" Nov 29 07:20:24 crc kubenswrapper[4947]: I1129 07:20:24.012351 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87lv8\" (UniqueName: \"kubernetes.io/projected/5f41a587-e5e5-4f2a-becb-4870793e41c9-kube-api-access-87lv8\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc\" (UID: \"5f41a587-e5e5-4f2a-becb-4870793e41c9\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc" Nov 29 07:20:24 crc kubenswrapper[4947]: I1129 07:20:24.012453 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f41a587-e5e5-4f2a-becb-4870793e41c9-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc\" (UID: \"5f41a587-e5e5-4f2a-becb-4870793e41c9\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc" Nov 29 07:20:24 crc kubenswrapper[4947]: I1129 07:20:24.012515 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f41a587-e5e5-4f2a-becb-4870793e41c9-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc\" (UID: \"5f41a587-e5e5-4f2a-becb-4870793e41c9\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc" Nov 29 07:20:24 crc kubenswrapper[4947]: I1129 07:20:24.017100 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f41a587-e5e5-4f2a-becb-4870793e41c9-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc\" (UID: \"5f41a587-e5e5-4f2a-becb-4870793e41c9\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc" Nov 29 07:20:24 crc kubenswrapper[4947]: I1129 07:20:24.017527 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f41a587-e5e5-4f2a-becb-4870793e41c9-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc\" (UID: \"5f41a587-e5e5-4f2a-becb-4870793e41c9\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc" Nov 29 07:20:24 crc kubenswrapper[4947]: I1129 07:20:24.017475 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5f41a587-e5e5-4f2a-becb-4870793e41c9-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc\" (UID: \"5f41a587-e5e5-4f2a-becb-4870793e41c9\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc" Nov 29 07:20:24 crc kubenswrapper[4947]: 
I1129 07:20:24.034851 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87lv8\" (UniqueName: \"kubernetes.io/projected/5f41a587-e5e5-4f2a-becb-4870793e41c9-kube-api-access-87lv8\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc\" (UID: \"5f41a587-e5e5-4f2a-becb-4870793e41c9\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc" Nov 29 07:20:24 crc kubenswrapper[4947]: I1129 07:20:24.100888 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc" Nov 29 07:20:24 crc kubenswrapper[4947]: I1129 07:20:24.648747 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc"] Nov 29 07:20:24 crc kubenswrapper[4947]: I1129 07:20:24.688560 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:20:25 crc kubenswrapper[4947]: I1129 07:20:25.758500 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc" event={"ID":"5f41a587-e5e5-4f2a-becb-4870793e41c9","Type":"ContainerStarted","Data":"c612d85d66b1c4f8b6bf059226ffe1e7c5bac3dba72c72d03da8c074dd019c9f"} Nov 29 07:20:26 crc kubenswrapper[4947]: I1129 07:20:26.770747 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc" event={"ID":"5f41a587-e5e5-4f2a-becb-4870793e41c9","Type":"ContainerStarted","Data":"b30c7aa2b461f38745ea3a21eaa722bdcdbd113c151d1fd2ea0eb310d390f554"} Nov 29 07:20:26 crc kubenswrapper[4947]: I1129 07:20:26.797299 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc" podStartSLOduration=2.930886776 podStartE2EDuration="3.797262269s" podCreationTimestamp="2025-11-29 07:20:23 +0000 UTC" 
firstStartedPulling="2025-11-29 07:20:24.688325454 +0000 UTC m=+2775.732707545" lastFinishedPulling="2025-11-29 07:20:25.554700957 +0000 UTC m=+2776.599083038" observedRunningTime="2025-11-29 07:20:26.787784639 +0000 UTC m=+2777.832166720" watchObservedRunningTime="2025-11-29 07:20:26.797262269 +0000 UTC m=+2777.841644350" Nov 29 07:20:30 crc kubenswrapper[4947]: I1129 07:20:30.823565 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f41a587-e5e5-4f2a-becb-4870793e41c9" containerID="b30c7aa2b461f38745ea3a21eaa722bdcdbd113c151d1fd2ea0eb310d390f554" exitCode=0 Nov 29 07:20:30 crc kubenswrapper[4947]: I1129 07:20:30.823903 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc" event={"ID":"5f41a587-e5e5-4f2a-becb-4870793e41c9","Type":"ContainerDied","Data":"b30c7aa2b461f38745ea3a21eaa722bdcdbd113c151d1fd2ea0eb310d390f554"} Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.261477 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc" Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.409726 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f41a587-e5e5-4f2a-becb-4870793e41c9-inventory\") pod \"5f41a587-e5e5-4f2a-becb-4870793e41c9\" (UID: \"5f41a587-e5e5-4f2a-becb-4870793e41c9\") " Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.409831 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f41a587-e5e5-4f2a-becb-4870793e41c9-ssh-key\") pod \"5f41a587-e5e5-4f2a-becb-4870793e41c9\" (UID: \"5f41a587-e5e5-4f2a-becb-4870793e41c9\") " Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.410090 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5f41a587-e5e5-4f2a-becb-4870793e41c9-ceph\") pod \"5f41a587-e5e5-4f2a-becb-4870793e41c9\" (UID: \"5f41a587-e5e5-4f2a-becb-4870793e41c9\") " Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.410196 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87lv8\" (UniqueName: \"kubernetes.io/projected/5f41a587-e5e5-4f2a-becb-4870793e41c9-kube-api-access-87lv8\") pod \"5f41a587-e5e5-4f2a-becb-4870793e41c9\" (UID: \"5f41a587-e5e5-4f2a-becb-4870793e41c9\") " Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.416188 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f41a587-e5e5-4f2a-becb-4870793e41c9-ceph" (OuterVolumeSpecName: "ceph") pod "5f41a587-e5e5-4f2a-becb-4870793e41c9" (UID: "5f41a587-e5e5-4f2a-becb-4870793e41c9"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.417874 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f41a587-e5e5-4f2a-becb-4870793e41c9-kube-api-access-87lv8" (OuterVolumeSpecName: "kube-api-access-87lv8") pod "5f41a587-e5e5-4f2a-becb-4870793e41c9" (UID: "5f41a587-e5e5-4f2a-becb-4870793e41c9"). InnerVolumeSpecName "kube-api-access-87lv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.440686 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f41a587-e5e5-4f2a-becb-4870793e41c9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5f41a587-e5e5-4f2a-becb-4870793e41c9" (UID: "5f41a587-e5e5-4f2a-becb-4870793e41c9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.441568 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f41a587-e5e5-4f2a-becb-4870793e41c9-inventory" (OuterVolumeSpecName: "inventory") pod "5f41a587-e5e5-4f2a-becb-4870793e41c9" (UID: "5f41a587-e5e5-4f2a-becb-4870793e41c9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.513586 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f41a587-e5e5-4f2a-becb-4870793e41c9-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.513655 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f41a587-e5e5-4f2a-becb-4870793e41c9-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.513668 4947 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5f41a587-e5e5-4f2a-becb-4870793e41c9-ceph\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.513679 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87lv8\" (UniqueName: \"kubernetes.io/projected/5f41a587-e5e5-4f2a-becb-4870793e41c9-kube-api-access-87lv8\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.848025 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc" event={"ID":"5f41a587-e5e5-4f2a-becb-4870793e41c9","Type":"ContainerDied","Data":"c612d85d66b1c4f8b6bf059226ffe1e7c5bac3dba72c72d03da8c074dd019c9f"} Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.848070 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c612d85d66b1c4f8b6bf059226ffe1e7c5bac3dba72c72d03da8c074dd019c9f" Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.848128 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc" Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.935949 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547"] Nov 29 07:20:32 crc kubenswrapper[4947]: E1129 07:20:32.936672 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f41a587-e5e5-4f2a-becb-4870793e41c9" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.936694 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f41a587-e5e5-4f2a-becb-4870793e41c9" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.936893 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f41a587-e5e5-4f2a-becb-4870793e41c9" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.941820 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547" Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.944966 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.945371 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.945551 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.945732 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.949377 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:20:32 crc kubenswrapper[4947]: I1129 07:20:32.949646 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547"] Nov 29 07:20:33 crc kubenswrapper[4947]: I1129 07:20:33.022201 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/decf89ef-31dd-410d-a70c-a19245e90e55-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6p547\" (UID: \"decf89ef-31dd-410d-a70c-a19245e90e55\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547" Nov 29 07:20:33 crc kubenswrapper[4947]: I1129 07:20:33.022625 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/decf89ef-31dd-410d-a70c-a19245e90e55-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6p547\" (UID: \"decf89ef-31dd-410d-a70c-a19245e90e55\") " 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547" Nov 29 07:20:33 crc kubenswrapper[4947]: I1129 07:20:33.022770 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/decf89ef-31dd-410d-a70c-a19245e90e55-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6p547\" (UID: \"decf89ef-31dd-410d-a70c-a19245e90e55\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547" Nov 29 07:20:33 crc kubenswrapper[4947]: I1129 07:20:33.023143 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htjrh\" (UniqueName: \"kubernetes.io/projected/decf89ef-31dd-410d-a70c-a19245e90e55-kube-api-access-htjrh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6p547\" (UID: \"decf89ef-31dd-410d-a70c-a19245e90e55\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547" Nov 29 07:20:33 crc kubenswrapper[4947]: I1129 07:20:33.124747 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/decf89ef-31dd-410d-a70c-a19245e90e55-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6p547\" (UID: \"decf89ef-31dd-410d-a70c-a19245e90e55\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547" Nov 29 07:20:33 crc kubenswrapper[4947]: I1129 07:20:33.124832 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/decf89ef-31dd-410d-a70c-a19245e90e55-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6p547\" (UID: \"decf89ef-31dd-410d-a70c-a19245e90e55\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547" Nov 29 07:20:33 crc kubenswrapper[4947]: I1129 07:20:33.124898 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/decf89ef-31dd-410d-a70c-a19245e90e55-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6p547\" (UID: \"decf89ef-31dd-410d-a70c-a19245e90e55\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547" Nov 29 07:20:33 crc kubenswrapper[4947]: I1129 07:20:33.125017 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htjrh\" (UniqueName: \"kubernetes.io/projected/decf89ef-31dd-410d-a70c-a19245e90e55-kube-api-access-htjrh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6p547\" (UID: \"decf89ef-31dd-410d-a70c-a19245e90e55\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547" Nov 29 07:20:33 crc kubenswrapper[4947]: I1129 07:20:33.131014 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/decf89ef-31dd-410d-a70c-a19245e90e55-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6p547\" (UID: \"decf89ef-31dd-410d-a70c-a19245e90e55\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547" Nov 29 07:20:33 crc kubenswrapper[4947]: I1129 07:20:33.131238 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/decf89ef-31dd-410d-a70c-a19245e90e55-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6p547\" (UID: \"decf89ef-31dd-410d-a70c-a19245e90e55\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547" Nov 29 07:20:33 crc kubenswrapper[4947]: I1129 07:20:33.136187 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/decf89ef-31dd-410d-a70c-a19245e90e55-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6p547\" (UID: \"decf89ef-31dd-410d-a70c-a19245e90e55\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547" Nov 29 07:20:33 crc 
kubenswrapper[4947]: I1129 07:20:33.150491 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htjrh\" (UniqueName: \"kubernetes.io/projected/decf89ef-31dd-410d-a70c-a19245e90e55-kube-api-access-htjrh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6p547\" (UID: \"decf89ef-31dd-410d-a70c-a19245e90e55\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547" Nov 29 07:20:33 crc kubenswrapper[4947]: I1129 07:20:33.268962 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547" Nov 29 07:20:33 crc kubenswrapper[4947]: I1129 07:20:33.859778 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547"] Nov 29 07:20:34 crc kubenswrapper[4947]: I1129 07:20:34.875408 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547" event={"ID":"decf89ef-31dd-410d-a70c-a19245e90e55","Type":"ContainerStarted","Data":"0a0f4c186158174ee454e3c4039a0aa81cee0df51915d3fddfe8bf8089b18361"} Nov 29 07:20:34 crc kubenswrapper[4947]: I1129 07:20:34.876543 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547" event={"ID":"decf89ef-31dd-410d-a70c-a19245e90e55","Type":"ContainerStarted","Data":"59abf2875b0583957feae0ee6e908537bdace1a7cc32e4bbbd946435b6a3fa10"} Nov 29 07:20:34 crc kubenswrapper[4947]: I1129 07:20:34.904614 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547" podStartSLOduration=2.466266854 podStartE2EDuration="2.904594036s" podCreationTimestamp="2025-11-29 07:20:32 +0000 UTC" firstStartedPulling="2025-11-29 07:20:33.852463203 +0000 UTC m=+2784.896845284" lastFinishedPulling="2025-11-29 07:20:34.290790375 +0000 UTC m=+2785.335172466" 
observedRunningTime="2025-11-29 07:20:34.903750385 +0000 UTC m=+2785.948132466" watchObservedRunningTime="2025-11-29 07:20:34.904594036 +0000 UTC m=+2785.948976117" Nov 29 07:21:22 crc kubenswrapper[4947]: I1129 07:21:22.518790 4947 generic.go:334] "Generic (PLEG): container finished" podID="decf89ef-31dd-410d-a70c-a19245e90e55" containerID="0a0f4c186158174ee454e3c4039a0aa81cee0df51915d3fddfe8bf8089b18361" exitCode=0 Nov 29 07:21:22 crc kubenswrapper[4947]: I1129 07:21:22.518902 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547" event={"ID":"decf89ef-31dd-410d-a70c-a19245e90e55","Type":"ContainerDied","Data":"0a0f4c186158174ee454e3c4039a0aa81cee0df51915d3fddfe8bf8089b18361"} Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.071782 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.130242 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/decf89ef-31dd-410d-a70c-a19245e90e55-ssh-key\") pod \"decf89ef-31dd-410d-a70c-a19245e90e55\" (UID: \"decf89ef-31dd-410d-a70c-a19245e90e55\") " Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.130343 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htjrh\" (UniqueName: \"kubernetes.io/projected/decf89ef-31dd-410d-a70c-a19245e90e55-kube-api-access-htjrh\") pod \"decf89ef-31dd-410d-a70c-a19245e90e55\" (UID: \"decf89ef-31dd-410d-a70c-a19245e90e55\") " Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.130507 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/decf89ef-31dd-410d-a70c-a19245e90e55-inventory\") pod \"decf89ef-31dd-410d-a70c-a19245e90e55\" (UID: 
\"decf89ef-31dd-410d-a70c-a19245e90e55\") " Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.130559 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/decf89ef-31dd-410d-a70c-a19245e90e55-ceph\") pod \"decf89ef-31dd-410d-a70c-a19245e90e55\" (UID: \"decf89ef-31dd-410d-a70c-a19245e90e55\") " Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.140180 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/decf89ef-31dd-410d-a70c-a19245e90e55-kube-api-access-htjrh" (OuterVolumeSpecName: "kube-api-access-htjrh") pod "decf89ef-31dd-410d-a70c-a19245e90e55" (UID: "decf89ef-31dd-410d-a70c-a19245e90e55"). InnerVolumeSpecName "kube-api-access-htjrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.140963 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/decf89ef-31dd-410d-a70c-a19245e90e55-ceph" (OuterVolumeSpecName: "ceph") pod "decf89ef-31dd-410d-a70c-a19245e90e55" (UID: "decf89ef-31dd-410d-a70c-a19245e90e55"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:21:24 crc kubenswrapper[4947]: E1129 07:21:24.167376 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/decf89ef-31dd-410d-a70c-a19245e90e55-inventory podName:decf89ef-31dd-410d-a70c-a19245e90e55 nodeName:}" failed. No retries permitted until 2025-11-29 07:21:24.667343816 +0000 UTC m=+2835.711725897 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/decf89ef-31dd-410d-a70c-a19245e90e55-inventory") pod "decf89ef-31dd-410d-a70c-a19245e90e55" (UID: "decf89ef-31dd-410d-a70c-a19245e90e55") : error deleting /var/lib/kubelet/pods/decf89ef-31dd-410d-a70c-a19245e90e55/volume-subpaths: remove /var/lib/kubelet/pods/decf89ef-31dd-410d-a70c-a19245e90e55/volume-subpaths: no such file or directory Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.172690 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/decf89ef-31dd-410d-a70c-a19245e90e55-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "decf89ef-31dd-410d-a70c-a19245e90e55" (UID: "decf89ef-31dd-410d-a70c-a19245e90e55"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.232729 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/decf89ef-31dd-410d-a70c-a19245e90e55-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.232761 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htjrh\" (UniqueName: \"kubernetes.io/projected/decf89ef-31dd-410d-a70c-a19245e90e55-kube-api-access-htjrh\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.232774 4947 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/decf89ef-31dd-410d-a70c-a19245e90e55-ceph\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.539393 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547" event={"ID":"decf89ef-31dd-410d-a70c-a19245e90e55","Type":"ContainerDied","Data":"59abf2875b0583957feae0ee6e908537bdace1a7cc32e4bbbd946435b6a3fa10"} Nov 29 07:21:24 crc 
kubenswrapper[4947]: I1129 07:21:24.539451 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59abf2875b0583957feae0ee6e908537bdace1a7cc32e4bbbd946435b6a3fa10" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.539788 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6p547" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.650368 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-xkggt"] Nov 29 07:21:24 crc kubenswrapper[4947]: E1129 07:21:24.650832 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="decf89ef-31dd-410d-a70c-a19245e90e55" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.650848 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="decf89ef-31dd-410d-a70c-a19245e90e55" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.651019 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="decf89ef-31dd-410d-a70c-a19245e90e55" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.651745 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xkggt" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.665345 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-xkggt"] Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.744876 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/decf89ef-31dd-410d-a70c-a19245e90e55-inventory\") pod \"decf89ef-31dd-410d-a70c-a19245e90e55\" (UID: \"decf89ef-31dd-410d-a70c-a19245e90e55\") " Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.745309 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e3ebc847-298f-4b57-920d-ed63fb69a427-ceph\") pod \"ssh-known-hosts-edpm-deployment-xkggt\" (UID: \"e3ebc847-298f-4b57-920d-ed63fb69a427\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkggt" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.745338 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e3ebc847-298f-4b57-920d-ed63fb69a427-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-xkggt\" (UID: \"e3ebc847-298f-4b57-920d-ed63fb69a427\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkggt" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.745375 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3ebc847-298f-4b57-920d-ed63fb69a427-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-xkggt\" (UID: \"e3ebc847-298f-4b57-920d-ed63fb69a427\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkggt" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.745441 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22z7z\" (UniqueName: \"kubernetes.io/projected/e3ebc847-298f-4b57-920d-ed63fb69a427-kube-api-access-22z7z\") pod \"ssh-known-hosts-edpm-deployment-xkggt\" (UID: \"e3ebc847-298f-4b57-920d-ed63fb69a427\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkggt" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.749968 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/decf89ef-31dd-410d-a70c-a19245e90e55-inventory" (OuterVolumeSpecName: "inventory") pod "decf89ef-31dd-410d-a70c-a19245e90e55" (UID: "decf89ef-31dd-410d-a70c-a19245e90e55"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.847785 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3ebc847-298f-4b57-920d-ed63fb69a427-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-xkggt\" (UID: \"e3ebc847-298f-4b57-920d-ed63fb69a427\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkggt" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.847989 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22z7z\" (UniqueName: \"kubernetes.io/projected/e3ebc847-298f-4b57-920d-ed63fb69a427-kube-api-access-22z7z\") pod \"ssh-known-hosts-edpm-deployment-xkggt\" (UID: \"e3ebc847-298f-4b57-920d-ed63fb69a427\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkggt" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.848210 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e3ebc847-298f-4b57-920d-ed63fb69a427-ceph\") pod \"ssh-known-hosts-edpm-deployment-xkggt\" (UID: \"e3ebc847-298f-4b57-920d-ed63fb69a427\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-xkggt" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.848290 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e3ebc847-298f-4b57-920d-ed63fb69a427-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-xkggt\" (UID: \"e3ebc847-298f-4b57-920d-ed63fb69a427\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkggt" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.848421 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/decf89ef-31dd-410d-a70c-a19245e90e55-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.855450 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e3ebc847-298f-4b57-920d-ed63fb69a427-ceph\") pod \"ssh-known-hosts-edpm-deployment-xkggt\" (UID: \"e3ebc847-298f-4b57-920d-ed63fb69a427\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkggt" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.855847 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e3ebc847-298f-4b57-920d-ed63fb69a427-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-xkggt\" (UID: \"e3ebc847-298f-4b57-920d-ed63fb69a427\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkggt" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.856016 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3ebc847-298f-4b57-920d-ed63fb69a427-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-xkggt\" (UID: \"e3ebc847-298f-4b57-920d-ed63fb69a427\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkggt" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.869549 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-22z7z\" (UniqueName: \"kubernetes.io/projected/e3ebc847-298f-4b57-920d-ed63fb69a427-kube-api-access-22z7z\") pod \"ssh-known-hosts-edpm-deployment-xkggt\" (UID: \"e3ebc847-298f-4b57-920d-ed63fb69a427\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkggt" Nov 29 07:21:24 crc kubenswrapper[4947]: I1129 07:21:24.975805 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xkggt" Nov 29 07:21:25 crc kubenswrapper[4947]: I1129 07:21:25.582852 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-xkggt"] Nov 29 07:21:26 crc kubenswrapper[4947]: I1129 07:21:26.559110 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xkggt" event={"ID":"e3ebc847-298f-4b57-920d-ed63fb69a427","Type":"ContainerStarted","Data":"751765c6197ff2d4cc48e8fd23c3437852c517ff6bd6ab87aca73d6e93d045e1"} Nov 29 07:21:26 crc kubenswrapper[4947]: I1129 07:21:26.559518 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xkggt" event={"ID":"e3ebc847-298f-4b57-920d-ed63fb69a427","Type":"ContainerStarted","Data":"d2144707acdf158b1863ef389826f4573c764e074296c4b0bd4924e8ca53399c"} Nov 29 07:21:26 crc kubenswrapper[4947]: I1129 07:21:26.583643 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-xkggt" podStartSLOduration=2.037342654 podStartE2EDuration="2.583623214s" podCreationTimestamp="2025-11-29 07:21:24 +0000 UTC" firstStartedPulling="2025-11-29 07:21:25.583627212 +0000 UTC m=+2836.628009293" lastFinishedPulling="2025-11-29 07:21:26.129907772 +0000 UTC m=+2837.174289853" observedRunningTime="2025-11-29 07:21:26.576619606 +0000 UTC m=+2837.621001707" watchObservedRunningTime="2025-11-29 07:21:26.583623214 +0000 UTC m=+2837.628005295" Nov 29 07:21:37 crc 
kubenswrapper[4947]: I1129 07:21:37.658260 4947 generic.go:334] "Generic (PLEG): container finished" podID="e3ebc847-298f-4b57-920d-ed63fb69a427" containerID="751765c6197ff2d4cc48e8fd23c3437852c517ff6bd6ab87aca73d6e93d045e1" exitCode=0 Nov 29 07:21:37 crc kubenswrapper[4947]: I1129 07:21:37.658372 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xkggt" event={"ID":"e3ebc847-298f-4b57-920d-ed63fb69a427","Type":"ContainerDied","Data":"751765c6197ff2d4cc48e8fd23c3437852c517ff6bd6ab87aca73d6e93d045e1"} Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.146820 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xkggt" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.261172 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22z7z\" (UniqueName: \"kubernetes.io/projected/e3ebc847-298f-4b57-920d-ed63fb69a427-kube-api-access-22z7z\") pod \"e3ebc847-298f-4b57-920d-ed63fb69a427\" (UID: \"e3ebc847-298f-4b57-920d-ed63fb69a427\") " Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.261289 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e3ebc847-298f-4b57-920d-ed63fb69a427-inventory-0\") pod \"e3ebc847-298f-4b57-920d-ed63fb69a427\" (UID: \"e3ebc847-298f-4b57-920d-ed63fb69a427\") " Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.261628 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e3ebc847-298f-4b57-920d-ed63fb69a427-ceph\") pod \"e3ebc847-298f-4b57-920d-ed63fb69a427\" (UID: \"e3ebc847-298f-4b57-920d-ed63fb69a427\") " Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.261751 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e3ebc847-298f-4b57-920d-ed63fb69a427-ssh-key-openstack-edpm-ipam\") pod \"e3ebc847-298f-4b57-920d-ed63fb69a427\" (UID: \"e3ebc847-298f-4b57-920d-ed63fb69a427\") " Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.281654 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3ebc847-298f-4b57-920d-ed63fb69a427-kube-api-access-22z7z" (OuterVolumeSpecName: "kube-api-access-22z7z") pod "e3ebc847-298f-4b57-920d-ed63fb69a427" (UID: "e3ebc847-298f-4b57-920d-ed63fb69a427"). InnerVolumeSpecName "kube-api-access-22z7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.282154 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3ebc847-298f-4b57-920d-ed63fb69a427-ceph" (OuterVolumeSpecName: "ceph") pod "e3ebc847-298f-4b57-920d-ed63fb69a427" (UID: "e3ebc847-298f-4b57-920d-ed63fb69a427"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.297238 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3ebc847-298f-4b57-920d-ed63fb69a427-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "e3ebc847-298f-4b57-920d-ed63fb69a427" (UID: "e3ebc847-298f-4b57-920d-ed63fb69a427"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.299504 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3ebc847-298f-4b57-920d-ed63fb69a427-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e3ebc847-298f-4b57-920d-ed63fb69a427" (UID: "e3ebc847-298f-4b57-920d-ed63fb69a427"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.365178 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22z7z\" (UniqueName: \"kubernetes.io/projected/e3ebc847-298f-4b57-920d-ed63fb69a427-kube-api-access-22z7z\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.365258 4947 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e3ebc847-298f-4b57-920d-ed63fb69a427-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.365274 4947 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e3ebc847-298f-4b57-920d-ed63fb69a427-ceph\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.365285 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3ebc847-298f-4b57-920d-ed63fb69a427-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.684609 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xkggt" event={"ID":"e3ebc847-298f-4b57-920d-ed63fb69a427","Type":"ContainerDied","Data":"d2144707acdf158b1863ef389826f4573c764e074296c4b0bd4924e8ca53399c"} Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.684863 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2144707acdf158b1863ef389826f4573c764e074296c4b0bd4924e8ca53399c" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.684918 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xkggt" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.751194 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw"] Nov 29 07:21:39 crc kubenswrapper[4947]: E1129 07:21:39.751644 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ebc847-298f-4b57-920d-ed63fb69a427" containerName="ssh-known-hosts-edpm-deployment" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.751662 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ebc847-298f-4b57-920d-ed63fb69a427" containerName="ssh-known-hosts-edpm-deployment" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.751864 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3ebc847-298f-4b57-920d-ed63fb69a427" containerName="ssh-known-hosts-edpm-deployment" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.765417 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw"] Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.765550 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.768041 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.769555 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.769805 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.770711 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.776627 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.879393 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e443fdc5-130b-4c65-b8f6-54c118beadc6-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lhmkw\" (UID: \"e443fdc5-130b-4c65-b8f6-54c118beadc6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.879477 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjnkw\" (UniqueName: \"kubernetes.io/projected/e443fdc5-130b-4c65-b8f6-54c118beadc6-kube-api-access-wjnkw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lhmkw\" (UID: \"e443fdc5-130b-4c65-b8f6-54c118beadc6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.879849 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e443fdc5-130b-4c65-b8f6-54c118beadc6-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lhmkw\" (UID: \"e443fdc5-130b-4c65-b8f6-54c118beadc6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.880124 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e443fdc5-130b-4c65-b8f6-54c118beadc6-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lhmkw\" (UID: \"e443fdc5-130b-4c65-b8f6-54c118beadc6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.982528 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e443fdc5-130b-4c65-b8f6-54c118beadc6-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lhmkw\" (UID: \"e443fdc5-130b-4c65-b8f6-54c118beadc6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.982629 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e443fdc5-130b-4c65-b8f6-54c118beadc6-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lhmkw\" (UID: \"e443fdc5-130b-4c65-b8f6-54c118beadc6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.982744 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e443fdc5-130b-4c65-b8f6-54c118beadc6-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lhmkw\" (UID: \"e443fdc5-130b-4c65-b8f6-54c118beadc6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw" Nov 
29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.982798 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjnkw\" (UniqueName: \"kubernetes.io/projected/e443fdc5-130b-4c65-b8f6-54c118beadc6-kube-api-access-wjnkw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lhmkw\" (UID: \"e443fdc5-130b-4c65-b8f6-54c118beadc6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.992662 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e443fdc5-130b-4c65-b8f6-54c118beadc6-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lhmkw\" (UID: \"e443fdc5-130b-4c65-b8f6-54c118beadc6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw" Nov 29 07:21:39 crc kubenswrapper[4947]: I1129 07:21:39.993537 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e443fdc5-130b-4c65-b8f6-54c118beadc6-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lhmkw\" (UID: \"e443fdc5-130b-4c65-b8f6-54c118beadc6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw" Nov 29 07:21:40 crc kubenswrapper[4947]: I1129 07:21:40.001752 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e443fdc5-130b-4c65-b8f6-54c118beadc6-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lhmkw\" (UID: \"e443fdc5-130b-4c65-b8f6-54c118beadc6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw" Nov 29 07:21:40 crc kubenswrapper[4947]: I1129 07:21:40.002052 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjnkw\" (UniqueName: \"kubernetes.io/projected/e443fdc5-130b-4c65-b8f6-54c118beadc6-kube-api-access-wjnkw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lhmkw\" (UID: 
\"e443fdc5-130b-4c65-b8f6-54c118beadc6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw" Nov 29 07:21:40 crc kubenswrapper[4947]: I1129 07:21:40.094834 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw" Nov 29 07:21:40 crc kubenswrapper[4947]: I1129 07:21:40.682069 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw"] Nov 29 07:21:40 crc kubenswrapper[4947]: W1129 07:21:40.710781 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode443fdc5_130b_4c65_b8f6_54c118beadc6.slice/crio-b36c8b5cc48e8de87185f07d9ca5f0b786e15cbd7d445370464c50bd9a3f72d7 WatchSource:0}: Error finding container b36c8b5cc48e8de87185f07d9ca5f0b786e15cbd7d445370464c50bd9a3f72d7: Status 404 returned error can't find the container with id b36c8b5cc48e8de87185f07d9ca5f0b786e15cbd7d445370464c50bd9a3f72d7 Nov 29 07:21:41 crc kubenswrapper[4947]: I1129 07:21:41.719061 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw" event={"ID":"e443fdc5-130b-4c65-b8f6-54c118beadc6","Type":"ContainerStarted","Data":"b36c8b5cc48e8de87185f07d9ca5f0b786e15cbd7d445370464c50bd9a3f72d7"} Nov 29 07:21:42 crc kubenswrapper[4947]: I1129 07:21:42.731492 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw" event={"ID":"e443fdc5-130b-4c65-b8f6-54c118beadc6","Type":"ContainerStarted","Data":"0f494a256fe5a5a2f1fdaae449f7574e5903eeeebef4c4c7ad315e255835969a"} Nov 29 07:21:42 crc kubenswrapper[4947]: I1129 07:21:42.762119 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw" podStartSLOduration=2.906076352 podStartE2EDuration="3.762096325s" 
podCreationTimestamp="2025-11-29 07:21:39 +0000 UTC" firstStartedPulling="2025-11-29 07:21:40.714711768 +0000 UTC m=+2851.759093849" lastFinishedPulling="2025-11-29 07:21:41.570731741 +0000 UTC m=+2852.615113822" observedRunningTime="2025-11-29 07:21:42.748124571 +0000 UTC m=+2853.792506652" watchObservedRunningTime="2025-11-29 07:21:42.762096325 +0000 UTC m=+2853.806478426" Nov 29 07:21:50 crc kubenswrapper[4947]: I1129 07:21:50.818507 4947 generic.go:334] "Generic (PLEG): container finished" podID="e443fdc5-130b-4c65-b8f6-54c118beadc6" containerID="0f494a256fe5a5a2f1fdaae449f7574e5903eeeebef4c4c7ad315e255835969a" exitCode=0 Nov 29 07:21:50 crc kubenswrapper[4947]: I1129 07:21:50.819173 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw" event={"ID":"e443fdc5-130b-4c65-b8f6-54c118beadc6","Type":"ContainerDied","Data":"0f494a256fe5a5a2f1fdaae449f7574e5903eeeebef4c4c7ad315e255835969a"} Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.309606 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw" Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.423403 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e443fdc5-130b-4c65-b8f6-54c118beadc6-inventory\") pod \"e443fdc5-130b-4c65-b8f6-54c118beadc6\" (UID: \"e443fdc5-130b-4c65-b8f6-54c118beadc6\") " Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.423772 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e443fdc5-130b-4c65-b8f6-54c118beadc6-ssh-key\") pod \"e443fdc5-130b-4c65-b8f6-54c118beadc6\" (UID: \"e443fdc5-130b-4c65-b8f6-54c118beadc6\") " Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.423807 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e443fdc5-130b-4c65-b8f6-54c118beadc6-ceph\") pod \"e443fdc5-130b-4c65-b8f6-54c118beadc6\" (UID: \"e443fdc5-130b-4c65-b8f6-54c118beadc6\") " Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.423900 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjnkw\" (UniqueName: \"kubernetes.io/projected/e443fdc5-130b-4c65-b8f6-54c118beadc6-kube-api-access-wjnkw\") pod \"e443fdc5-130b-4c65-b8f6-54c118beadc6\" (UID: \"e443fdc5-130b-4c65-b8f6-54c118beadc6\") " Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.430417 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e443fdc5-130b-4c65-b8f6-54c118beadc6-ceph" (OuterVolumeSpecName: "ceph") pod "e443fdc5-130b-4c65-b8f6-54c118beadc6" (UID: "e443fdc5-130b-4c65-b8f6-54c118beadc6"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.430836 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e443fdc5-130b-4c65-b8f6-54c118beadc6-kube-api-access-wjnkw" (OuterVolumeSpecName: "kube-api-access-wjnkw") pod "e443fdc5-130b-4c65-b8f6-54c118beadc6" (UID: "e443fdc5-130b-4c65-b8f6-54c118beadc6"). InnerVolumeSpecName "kube-api-access-wjnkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.454007 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e443fdc5-130b-4c65-b8f6-54c118beadc6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e443fdc5-130b-4c65-b8f6-54c118beadc6" (UID: "e443fdc5-130b-4c65-b8f6-54c118beadc6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.455578 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e443fdc5-130b-4c65-b8f6-54c118beadc6-inventory" (OuterVolumeSpecName: "inventory") pod "e443fdc5-130b-4c65-b8f6-54c118beadc6" (UID: "e443fdc5-130b-4c65-b8f6-54c118beadc6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.526418 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e443fdc5-130b-4c65-b8f6-54c118beadc6-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.526471 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e443fdc5-130b-4c65-b8f6-54c118beadc6-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.526483 4947 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e443fdc5-130b-4c65-b8f6-54c118beadc6-ceph\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.526498 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjnkw\" (UniqueName: \"kubernetes.io/projected/e443fdc5-130b-4c65-b8f6-54c118beadc6-kube-api-access-wjnkw\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.840500 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw" event={"ID":"e443fdc5-130b-4c65-b8f6-54c118beadc6","Type":"ContainerDied","Data":"b36c8b5cc48e8de87185f07d9ca5f0b786e15cbd7d445370464c50bd9a3f72d7"} Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.840542 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b36c8b5cc48e8de87185f07d9ca5f0b786e15cbd7d445370464c50bd9a3f72d7" Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.840583 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lhmkw" Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.979788 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn"] Nov 29 07:21:52 crc kubenswrapper[4947]: E1129 07:21:52.980393 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e443fdc5-130b-4c65-b8f6-54c118beadc6" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.980419 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="e443fdc5-130b-4c65-b8f6-54c118beadc6" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.980648 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="e443fdc5-130b-4c65-b8f6-54c118beadc6" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.981486 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn"
Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.984311 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.984596 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.985138 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.985244 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs"
Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.986988 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.987173 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.987237 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 29 07:21:52 crc kubenswrapper[4947]: I1129 07:21:52.995422 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn"]
Nov 29 07:21:53 crc kubenswrapper[4947]: I1129 07:21:53.038676 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7119042f-970c-41db-8f48-6543710b205e-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn\" (UID: \"7119042f-970c-41db-8f48-6543710b205e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn"
Nov 29 07:21:53 crc kubenswrapper[4947]: I1129 07:21:53.038739 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7119042f-970c-41db-8f48-6543710b205e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn\" (UID: \"7119042f-970c-41db-8f48-6543710b205e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn"
Nov 29 07:21:53 crc kubenswrapper[4947]: I1129 07:21:53.038809 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpmlj\" (UniqueName: \"kubernetes.io/projected/7119042f-970c-41db-8f48-6543710b205e-kube-api-access-tpmlj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn\" (UID: \"7119042f-970c-41db-8f48-6543710b205e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn"
Nov 29 07:21:53 crc kubenswrapper[4947]: I1129 07:21:53.038843 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7119042f-970c-41db-8f48-6543710b205e-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn\" (UID: \"7119042f-970c-41db-8f48-6543710b205e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn"
Nov 29 07:21:53 crc kubenswrapper[4947]: I1129 07:21:53.139466 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7119042f-970c-41db-8f48-6543710b205e-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn\" (UID: \"7119042f-970c-41db-8f48-6543710b205e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn"
Nov 29 07:21:53 crc kubenswrapper[4947]: I1129 07:21:53.139550 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7119042f-970c-41db-8f48-6543710b205e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn\" (UID: \"7119042f-970c-41db-8f48-6543710b205e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn"
Nov 29 07:21:53 crc kubenswrapper[4947]: I1129 07:21:53.139613 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpmlj\" (UniqueName: \"kubernetes.io/projected/7119042f-970c-41db-8f48-6543710b205e-kube-api-access-tpmlj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn\" (UID: \"7119042f-970c-41db-8f48-6543710b205e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn"
Nov 29 07:21:53 crc kubenswrapper[4947]: I1129 07:21:53.139638 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7119042f-970c-41db-8f48-6543710b205e-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn\" (UID: \"7119042f-970c-41db-8f48-6543710b205e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn"
Nov 29 07:21:53 crc kubenswrapper[4947]: I1129 07:21:53.147074 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7119042f-970c-41db-8f48-6543710b205e-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn\" (UID: \"7119042f-970c-41db-8f48-6543710b205e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn"
Nov 29 07:21:53 crc kubenswrapper[4947]: I1129 07:21:53.147117 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7119042f-970c-41db-8f48-6543710b205e-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn\" (UID: \"7119042f-970c-41db-8f48-6543710b205e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn"
Nov 29 07:21:53 crc kubenswrapper[4947]: I1129 07:21:53.147522 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7119042f-970c-41db-8f48-6543710b205e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn\" (UID: \"7119042f-970c-41db-8f48-6543710b205e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn"
Nov 29 07:21:53 crc kubenswrapper[4947]: I1129 07:21:53.158192 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpmlj\" (UniqueName: \"kubernetes.io/projected/7119042f-970c-41db-8f48-6543710b205e-kube-api-access-tpmlj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn\" (UID: \"7119042f-970c-41db-8f48-6543710b205e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn"
Nov 29 07:21:53 crc kubenswrapper[4947]: I1129 07:21:53.306565 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn"
Nov 29 07:21:53 crc kubenswrapper[4947]: I1129 07:21:53.851252 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn"]
Nov 29 07:21:54 crc kubenswrapper[4947]: I1129 07:21:54.861781 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn" event={"ID":"7119042f-970c-41db-8f48-6543710b205e","Type":"ContainerStarted","Data":"8d8b92acecf77ce60a79d366253adf85eb3959ffba10896ba188d823a0170389"}
Nov 29 07:21:55 crc kubenswrapper[4947]: I1129 07:21:55.871607 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn" event={"ID":"7119042f-970c-41db-8f48-6543710b205e","Type":"ContainerStarted","Data":"7dfaf4d7f02eff39f50f565252e97eccab55710dab4f82e709ba65e7cadbdfb5"}
Nov 29 07:21:55 crc kubenswrapper[4947]: I1129 07:21:55.899453 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn" podStartSLOduration=2.477804091 podStartE2EDuration="3.899428261s" podCreationTimestamp="2025-11-29 07:21:52 +0000 UTC" firstStartedPulling="2025-11-29 07:21:53.862628955 +0000 UTC m=+2864.907011036" lastFinishedPulling="2025-11-29 07:21:55.284253125 +0000 UTC m=+2866.328635206" observedRunningTime="2025-11-29 07:21:55.894462006 +0000 UTC m=+2866.938844097" watchObservedRunningTime="2025-11-29 07:21:55.899428261 +0000 UTC m=+2866.943810342"
Nov 29 07:22:08 crc kubenswrapper[4947]: I1129 07:22:08.122292 4947 generic.go:334] "Generic (PLEG): container finished" podID="7119042f-970c-41db-8f48-6543710b205e" containerID="7dfaf4d7f02eff39f50f565252e97eccab55710dab4f82e709ba65e7cadbdfb5" exitCode=0
Nov 29 07:22:08 crc kubenswrapper[4947]: I1129 07:22:08.122384 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn" event={"ID":"7119042f-970c-41db-8f48-6543710b205e","Type":"ContainerDied","Data":"7dfaf4d7f02eff39f50f565252e97eccab55710dab4f82e709ba65e7cadbdfb5"}
Nov 29 07:22:09 crc kubenswrapper[4947]: I1129 07:22:09.628954 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn"
Nov 29 07:22:09 crc kubenswrapper[4947]: I1129 07:22:09.774276 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7119042f-970c-41db-8f48-6543710b205e-ssh-key\") pod \"7119042f-970c-41db-8f48-6543710b205e\" (UID: \"7119042f-970c-41db-8f48-6543710b205e\") "
Nov 29 07:22:09 crc kubenswrapper[4947]: I1129 07:22:09.774459 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpmlj\" (UniqueName: \"kubernetes.io/projected/7119042f-970c-41db-8f48-6543710b205e-kube-api-access-tpmlj\") pod \"7119042f-970c-41db-8f48-6543710b205e\" (UID: \"7119042f-970c-41db-8f48-6543710b205e\") "
Nov 29 07:22:09 crc kubenswrapper[4947]: I1129 07:22:09.774569 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7119042f-970c-41db-8f48-6543710b205e-inventory\") pod \"7119042f-970c-41db-8f48-6543710b205e\" (UID: \"7119042f-970c-41db-8f48-6543710b205e\") "
Nov 29 07:22:09 crc kubenswrapper[4947]: I1129 07:22:09.774672 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7119042f-970c-41db-8f48-6543710b205e-ceph\") pod \"7119042f-970c-41db-8f48-6543710b205e\" (UID: \"7119042f-970c-41db-8f48-6543710b205e\") "
Nov 29 07:22:09 crc kubenswrapper[4947]: I1129 07:22:09.784556 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7119042f-970c-41db-8f48-6543710b205e-kube-api-access-tpmlj" (OuterVolumeSpecName: "kube-api-access-tpmlj") pod "7119042f-970c-41db-8f48-6543710b205e" (UID: "7119042f-970c-41db-8f48-6543710b205e"). InnerVolumeSpecName "kube-api-access-tpmlj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:22:09 crc kubenswrapper[4947]: I1129 07:22:09.784575 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7119042f-970c-41db-8f48-6543710b205e-ceph" (OuterVolumeSpecName: "ceph") pod "7119042f-970c-41db-8f48-6543710b205e" (UID: "7119042f-970c-41db-8f48-6543710b205e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:22:09 crc kubenswrapper[4947]: I1129 07:22:09.807899 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7119042f-970c-41db-8f48-6543710b205e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7119042f-970c-41db-8f48-6543710b205e" (UID: "7119042f-970c-41db-8f48-6543710b205e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:22:09 crc kubenswrapper[4947]: I1129 07:22:09.811934 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7119042f-970c-41db-8f48-6543710b205e-inventory" (OuterVolumeSpecName: "inventory") pod "7119042f-970c-41db-8f48-6543710b205e" (UID: "7119042f-970c-41db-8f48-6543710b205e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:22:09 crc kubenswrapper[4947]: I1129 07:22:09.877696 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpmlj\" (UniqueName: \"kubernetes.io/projected/7119042f-970c-41db-8f48-6543710b205e-kube-api-access-tpmlj\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:09 crc kubenswrapper[4947]: I1129 07:22:09.877753 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7119042f-970c-41db-8f48-6543710b205e-inventory\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:09 crc kubenswrapper[4947]: I1129 07:22:09.877766 4947 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7119042f-970c-41db-8f48-6543710b205e-ceph\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:09 crc kubenswrapper[4947]: I1129 07:22:09.877776 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7119042f-970c-41db-8f48-6543710b205e-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.144629 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn" event={"ID":"7119042f-970c-41db-8f48-6543710b205e","Type":"ContainerDied","Data":"8d8b92acecf77ce60a79d366253adf85eb3959ffba10896ba188d823a0170389"}
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.144909 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.145095 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d8b92acecf77ce60a79d366253adf85eb3959ffba10896ba188d823a0170389"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.290135 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"]
Nov 29 07:22:10 crc kubenswrapper[4947]: E1129 07:22:10.290659 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7119042f-970c-41db-8f48-6543710b205e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.290682 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7119042f-970c-41db-8f48-6543710b205e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.290866 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="7119042f-970c-41db-8f48-6543710b205e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.291652 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.297670 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.297976 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.298031 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.298202 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.298645 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.298987 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.300305 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.300310 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.306285 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"]
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.490735 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.490843 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.490882 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.490916 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.490945 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpgdr\" (UniqueName: \"kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-kube-api-access-mpgdr\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.490978 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.491058 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.491119 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.491151 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.491186 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.491236 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.491271 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.491310 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.594139 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.594237 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.594271 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpgdr\" (UniqueName: \"kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-kube-api-access-mpgdr\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.594304 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.594392 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.594471 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.594501 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.594541 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.594589 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.594619 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.594667 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.594707 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.594763 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.602284 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.603503 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.604070 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.606966 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.607773 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.611137 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.614094 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.614178 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.614425 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.614509 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.615689 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.619545 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.625102 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpgdr\" (UniqueName: \"kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-kube-api-access-mpgdr\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:10 crc kubenswrapper[4947]: I1129 07:22:10.913963 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"
Nov 29 07:22:11 crc kubenswrapper[4947]: I1129 07:22:11.479784 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd"]
Nov 29 07:22:12 crc kubenswrapper[4947]: I1129 07:22:12.177329 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd" event={"ID":"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7","Type":"ContainerStarted","Data":"b0130b0b9cab6d1668e80a2d90054681a1ef6375294b43f619854be321df372d"}
Nov 29 07:22:13 crc kubenswrapper[4947]: I1129 07:22:13.195007 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd" event={"ID":"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7","Type":"ContainerStarted","Data":"6dd83adfe2683c898cb48fe9095fad7cfe192701c52cd110c40a214ee822d1fc"}
Nov 29 07:22:13 crc kubenswrapper[4947]: I1129 07:22:13.212594 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd" podStartSLOduration=2.454024187 podStartE2EDuration="3.212574768s" podCreationTimestamp="2025-11-29 07:22:10 +0000 UTC" firstStartedPulling="2025-11-29 07:22:11.491691301 +0000 UTC m=+2882.536073382" lastFinishedPulling="2025-11-29 07:22:12.250241882 +0000 UTC m=+2883.294623963" observedRunningTime="2025-11-29 07:22:13.208932326 +0000 UTC m=+2884.253314407" watchObservedRunningTime="2025-11-29 07:22:13.212574768 +0000 UTC m=+2884.256956849"
Nov 29 07:22:22 crc kubenswrapper[4947]: I1129 07:22:22.987455 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 29 07:22:22 crc kubenswrapper[4947]: I1129 07:22:22.988664 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 29 07:22:46 crc kubenswrapper[4947]: I1129 07:22:46.511896 4947 generic.go:334] "Generic (PLEG): container finished" podID="d33adfe1-c0d7-4896-8acf-00e6c0a4afc7" containerID="6dd83adfe2683c898cb48fe9095fad7cfe192701c52cd110c40a214ee822d1fc" exitCode=0
Nov 29 07:22:46 crc kubenswrapper[4947]: I1129 07:22:46.511992 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd" event={"ID":"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7","Type":"ContainerDied","Data":"6dd83adfe2683c898cb48fe9095fad7cfe192701c52cd110c40a214ee822d1fc"}
Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.033062 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.127239 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-ceph\") pod \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.127327 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-neutron-metadata-combined-ca-bundle\") pod \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.127409 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.128384 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-libvirt-combined-ca-bundle\") pod \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.128437 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-openstack-edpm-ipam-ovn-default-certs-0\") pod \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\" (UID: 
\"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.128469 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-nova-combined-ca-bundle\") pod \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.128525 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-ovn-combined-ca-bundle\") pod \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.128546 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-repo-setup-combined-ca-bundle\") pod \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.128580 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-bootstrap-combined-ca-bundle\") pod \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.128656 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-ssh-key\") pod \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.128684 4947 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.128787 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-inventory\") pod \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.128817 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpgdr\" (UniqueName: \"kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-kube-api-access-mpgdr\") pod \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\" (UID: \"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7\") " Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.136535 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7" (UID: "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.136629 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7" (UID: "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.136652 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7" (UID: "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.137233 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7" (UID: "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.139577 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-ceph" (OuterVolumeSpecName: "ceph") pod "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7" (UID: "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.139752 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7" (UID: "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.140484 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7" (UID: "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.141275 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7" (UID: "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.141911 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7" (UID: "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.145684 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7" (UID: "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.149241 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-kube-api-access-mpgdr" (OuterVolumeSpecName: "kube-api-access-mpgdr") pod "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7" (UID: "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7"). InnerVolumeSpecName "kube-api-access-mpgdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.166264 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7" (UID: "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.169084 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-inventory" (OuterVolumeSpecName: "inventory") pod "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7" (UID: "d33adfe1-c0d7-4896-8acf-00e6c0a4afc7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.233072 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.233140 4947 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.233161 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.233176 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpgdr\" (UniqueName: \"kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-kube-api-access-mpgdr\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.233188 4947 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-ceph\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.233197 4947 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.233210 4947 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.233239 4947 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.233249 4947 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.233258 4947 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.233268 4947 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.233277 4947 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.233286 4947 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33adfe1-c0d7-4896-8acf-00e6c0a4afc7-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.547165 4947 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd" event={"ID":"d33adfe1-c0d7-4896-8acf-00e6c0a4afc7","Type":"ContainerDied","Data":"b0130b0b9cab6d1668e80a2d90054681a1ef6375294b43f619854be321df372d"} Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.547280 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0130b0b9cab6d1668e80a2d90054681a1ef6375294b43f619854be321df372d" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.547405 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.700908 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr"] Nov 29 07:22:48 crc kubenswrapper[4947]: E1129 07:22:48.701591 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d33adfe1-c0d7-4896-8acf-00e6c0a4afc7" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.701621 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="d33adfe1-c0d7-4896-8acf-00e6c0a4afc7" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.701883 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="d33adfe1-c0d7-4896-8acf-00e6c0a4afc7" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.702777 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.706974 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.707559 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.708289 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.708398 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.755286 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr"] Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.764016 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.850563 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr\" (UID: \"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.850696 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr\" (UID: \"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f\") " 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.850805 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgnk8\" (UniqueName: \"kubernetes.io/projected/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-kube-api-access-vgnk8\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr\" (UID: \"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.850850 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr\" (UID: \"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.953356 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgnk8\" (UniqueName: \"kubernetes.io/projected/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-kube-api-access-vgnk8\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr\" (UID: \"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.953445 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr\" (UID: \"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.953579 4947 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr\" (UID: \"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.953671 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr\" (UID: \"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.960423 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr\" (UID: \"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.960711 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr\" (UID: \"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 07:22:48.966970 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr\" (UID: \"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr" Nov 29 07:22:48 crc kubenswrapper[4947]: I1129 
07:22:48.981084 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgnk8\" (UniqueName: \"kubernetes.io/projected/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-kube-api-access-vgnk8\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr\" (UID: \"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr" Nov 29 07:22:49 crc kubenswrapper[4947]: I1129 07:22:49.047200 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr" Nov 29 07:22:49 crc kubenswrapper[4947]: I1129 07:22:49.440630 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr"] Nov 29 07:22:49 crc kubenswrapper[4947]: I1129 07:22:49.559055 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr" event={"ID":"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f","Type":"ContainerStarted","Data":"8a82e3599492a6fdd26cdff182b3b5186927766b7f59b2af079e4bcdd6fe2dcb"} Nov 29 07:22:52 crc kubenswrapper[4947]: I1129 07:22:52.605495 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr" event={"ID":"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f","Type":"ContainerStarted","Data":"ef269d66de302016219e4b77e69f4d0cea5b496d447c6d7b70d700e203601958"} Nov 29 07:22:52 crc kubenswrapper[4947]: I1129 07:22:52.631539 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr" podStartSLOduration=2.724548442 podStartE2EDuration="4.631515848s" podCreationTimestamp="2025-11-29 07:22:48 +0000 UTC" firstStartedPulling="2025-11-29 07:22:49.454845346 +0000 UTC m=+2920.499227427" lastFinishedPulling="2025-11-29 07:22:51.361812752 +0000 UTC m=+2922.406194833" observedRunningTime="2025-11-29 
07:22:52.626088091 +0000 UTC m=+2923.670470192" watchObservedRunningTime="2025-11-29 07:22:52.631515848 +0000 UTC m=+2923.675897929" Nov 29 07:22:52 crc kubenswrapper[4947]: I1129 07:22:52.988063 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:22:52 crc kubenswrapper[4947]: I1129 07:22:52.988144 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:22:52 crc kubenswrapper[4947]: I1129 07:22:52.988203 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 07:22:52 crc kubenswrapper[4947]: I1129 07:22:52.989382 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c579e62fbc7ec20ab5411cd9ba7d8f85ddfcbbe286f5b6a301b4c37126dd7a87"} pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:22:52 crc kubenswrapper[4947]: I1129 07:22:52.989454 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" containerID="cri-o://c579e62fbc7ec20ab5411cd9ba7d8f85ddfcbbe286f5b6a301b4c37126dd7a87" gracePeriod=600 Nov 29 07:22:53 crc kubenswrapper[4947]: I1129 07:22:53.620237 4947 generic.go:334] 
"Generic (PLEG): container finished" podID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerID="c579e62fbc7ec20ab5411cd9ba7d8f85ddfcbbe286f5b6a301b4c37126dd7a87" exitCode=0 Nov 29 07:22:53 crc kubenswrapper[4947]: I1129 07:22:53.620341 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerDied","Data":"c579e62fbc7ec20ab5411cd9ba7d8f85ddfcbbe286f5b6a301b4c37126dd7a87"} Nov 29 07:22:53 crc kubenswrapper[4947]: I1129 07:22:53.620428 4947 scope.go:117] "RemoveContainer" containerID="4e8a1bb4365f266c0a40f1757eac36e4c4debcbd11bd1184ebc913d9f9683bb6" Nov 29 07:22:54 crc kubenswrapper[4947]: I1129 07:22:54.633051 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9"} Nov 29 07:22:58 crc kubenswrapper[4947]: I1129 07:22:58.680361 4947 generic.go:334] "Generic (PLEG): container finished" podID="5210261f-d57b-4ed1-bf74-7c4f73cb6a8f" containerID="ef269d66de302016219e4b77e69f4d0cea5b496d447c6d7b70d700e203601958" exitCode=0 Nov 29 07:22:58 crc kubenswrapper[4947]: I1129 07:22:58.680443 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr" event={"ID":"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f","Type":"ContainerDied","Data":"ef269d66de302016219e4b77e69f4d0cea5b496d447c6d7b70d700e203601958"} Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.173528 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.370791 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-ssh-key\") pod \"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f\" (UID: \"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f\") " Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.370948 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-ceph\") pod \"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f\" (UID: \"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f\") " Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.371029 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-inventory\") pod \"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f\" (UID: \"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f\") " Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.371090 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgnk8\" (UniqueName: \"kubernetes.io/projected/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-kube-api-access-vgnk8\") pod \"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f\" (UID: \"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f\") " Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.378947 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-ceph" (OuterVolumeSpecName: "ceph") pod "5210261f-d57b-4ed1-bf74-7c4f73cb6a8f" (UID: "5210261f-d57b-4ed1-bf74-7c4f73cb6a8f"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.384500 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-kube-api-access-vgnk8" (OuterVolumeSpecName: "kube-api-access-vgnk8") pod "5210261f-d57b-4ed1-bf74-7c4f73cb6a8f" (UID: "5210261f-d57b-4ed1-bf74-7c4f73cb6a8f"). InnerVolumeSpecName "kube-api-access-vgnk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.404996 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5210261f-d57b-4ed1-bf74-7c4f73cb6a8f" (UID: "5210261f-d57b-4ed1-bf74-7c4f73cb6a8f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.410328 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-inventory" (OuterVolumeSpecName: "inventory") pod "5210261f-d57b-4ed1-bf74-7c4f73cb6a8f" (UID: "5210261f-d57b-4ed1-bf74-7c4f73cb6a8f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.474400 4947 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-ceph\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.474487 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.474514 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgnk8\" (UniqueName: \"kubernetes.io/projected/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-kube-api-access-vgnk8\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.474537 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5210261f-d57b-4ed1-bf74-7c4f73cb6a8f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.701356 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr" event={"ID":"5210261f-d57b-4ed1-bf74-7c4f73cb6a8f","Type":"ContainerDied","Data":"8a82e3599492a6fdd26cdff182b3b5186927766b7f59b2af079e4bcdd6fe2dcb"} Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.701765 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a82e3599492a6fdd26cdff182b3b5186927766b7f59b2af079e4bcdd6fe2dcb" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.701440 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.884675 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd"] Nov 29 07:23:00 crc kubenswrapper[4947]: E1129 07:23:00.885118 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5210261f-d57b-4ed1-bf74-7c4f73cb6a8f" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.885145 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="5210261f-d57b-4ed1-bf74-7c4f73cb6a8f" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.885508 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="5210261f-d57b-4ed1-bf74-7c4f73cb6a8f" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.886275 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.889587 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.889899 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.890096 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.891646 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.891722 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.893035 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.897007 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd"] Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.988704 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t5tbd\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.988800 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t5tbd\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.989354 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t5tbd\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.989493 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t5tbd\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.989658 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t5tbd\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" Nov 29 07:23:00 crc kubenswrapper[4947]: I1129 07:23:00.989708 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pz4r\" (UniqueName: \"kubernetes.io/projected/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-kube-api-access-6pz4r\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t5tbd\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" Nov 29 07:23:01 
crc kubenswrapper[4947]: I1129 07:23:01.090515 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t5tbd\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" Nov 29 07:23:01 crc kubenswrapper[4947]: I1129 07:23:01.090588 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t5tbd\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" Nov 29 07:23:01 crc kubenswrapper[4947]: I1129 07:23:01.090639 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t5tbd\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" Nov 29 07:23:01 crc kubenswrapper[4947]: I1129 07:23:01.090663 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pz4r\" (UniqueName: \"kubernetes.io/projected/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-kube-api-access-6pz4r\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t5tbd\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" Nov 29 07:23:01 crc kubenswrapper[4947]: I1129 07:23:01.090697 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t5tbd\" (UID: 
\"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" Nov 29 07:23:01 crc kubenswrapper[4947]: I1129 07:23:01.090776 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t5tbd\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" Nov 29 07:23:01 crc kubenswrapper[4947]: I1129 07:23:01.092632 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t5tbd\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" Nov 29 07:23:01 crc kubenswrapper[4947]: I1129 07:23:01.096419 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t5tbd\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" Nov 29 07:23:01 crc kubenswrapper[4947]: I1129 07:23:01.096787 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t5tbd\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" Nov 29 07:23:01 crc kubenswrapper[4947]: I1129 07:23:01.106250 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t5tbd\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" Nov 29 07:23:01 crc kubenswrapper[4947]: I1129 07:23:01.108471 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t5tbd\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" Nov 29 07:23:01 crc kubenswrapper[4947]: I1129 07:23:01.108968 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pz4r\" (UniqueName: \"kubernetes.io/projected/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-kube-api-access-6pz4r\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t5tbd\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" Nov 29 07:23:01 crc kubenswrapper[4947]: I1129 07:23:01.214599 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" Nov 29 07:23:01 crc kubenswrapper[4947]: I1129 07:23:01.817196 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd"] Nov 29 07:23:02 crc kubenswrapper[4947]: I1129 07:23:02.735551 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" event={"ID":"ac3f3931-63b8-4dea-a2b4-33faca7b3a93","Type":"ContainerStarted","Data":"bedd4a38e499e32fbbb5a2185ca10736d78806b82f39931db61487cbb09ac019"} Nov 29 07:23:03 crc kubenswrapper[4947]: I1129 07:23:03.748519 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" event={"ID":"ac3f3931-63b8-4dea-a2b4-33faca7b3a93","Type":"ContainerStarted","Data":"4615b2a0a6e5161834555038736202836fb91b330ff62569fc239b72f2f18465"} Nov 29 07:23:03 crc kubenswrapper[4947]: I1129 07:23:03.776597 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" podStartSLOduration=3.159030629 podStartE2EDuration="3.77655836s" podCreationTimestamp="2025-11-29 07:23:00 +0000 UTC" firstStartedPulling="2025-11-29 07:23:01.827491142 +0000 UTC m=+2932.871873223" lastFinishedPulling="2025-11-29 07:23:02.445018873 +0000 UTC m=+2933.489400954" observedRunningTime="2025-11-29 07:23:03.770768264 +0000 UTC m=+2934.815150375" watchObservedRunningTime="2025-11-29 07:23:03.77655836 +0000 UTC m=+2934.820940441" Nov 29 07:23:19 crc kubenswrapper[4947]: I1129 07:23:19.150084 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vjglk"] Nov 29 07:23:19 crc kubenswrapper[4947]: I1129 07:23:19.153394 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vjglk" Nov 29 07:23:19 crc kubenswrapper[4947]: I1129 07:23:19.171432 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vjglk"] Nov 29 07:23:19 crc kubenswrapper[4947]: I1129 07:23:19.329189 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4810cfa-8049-4990-bbb4-765ea7f4c264-utilities\") pod \"community-operators-vjglk\" (UID: \"f4810cfa-8049-4990-bbb4-765ea7f4c264\") " pod="openshift-marketplace/community-operators-vjglk" Nov 29 07:23:19 crc kubenswrapper[4947]: I1129 07:23:19.329355 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4810cfa-8049-4990-bbb4-765ea7f4c264-catalog-content\") pod \"community-operators-vjglk\" (UID: \"f4810cfa-8049-4990-bbb4-765ea7f4c264\") " pod="openshift-marketplace/community-operators-vjglk" Nov 29 07:23:19 crc kubenswrapper[4947]: I1129 07:23:19.329380 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mpvv\" (UniqueName: \"kubernetes.io/projected/f4810cfa-8049-4990-bbb4-765ea7f4c264-kube-api-access-6mpvv\") pod \"community-operators-vjglk\" (UID: \"f4810cfa-8049-4990-bbb4-765ea7f4c264\") " pod="openshift-marketplace/community-operators-vjglk" Nov 29 07:23:19 crc kubenswrapper[4947]: I1129 07:23:19.432710 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4810cfa-8049-4990-bbb4-765ea7f4c264-utilities\") pod \"community-operators-vjglk\" (UID: \"f4810cfa-8049-4990-bbb4-765ea7f4c264\") " pod="openshift-marketplace/community-operators-vjglk" Nov 29 07:23:19 crc kubenswrapper[4947]: I1129 07:23:19.432765 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4810cfa-8049-4990-bbb4-765ea7f4c264-catalog-content\") pod \"community-operators-vjglk\" (UID: \"f4810cfa-8049-4990-bbb4-765ea7f4c264\") " pod="openshift-marketplace/community-operators-vjglk" Nov 29 07:23:19 crc kubenswrapper[4947]: I1129 07:23:19.432788 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mpvv\" (UniqueName: \"kubernetes.io/projected/f4810cfa-8049-4990-bbb4-765ea7f4c264-kube-api-access-6mpvv\") pod \"community-operators-vjglk\" (UID: \"f4810cfa-8049-4990-bbb4-765ea7f4c264\") " pod="openshift-marketplace/community-operators-vjglk" Nov 29 07:23:19 crc kubenswrapper[4947]: I1129 07:23:19.433401 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4810cfa-8049-4990-bbb4-765ea7f4c264-utilities\") pod \"community-operators-vjglk\" (UID: \"f4810cfa-8049-4990-bbb4-765ea7f4c264\") " pod="openshift-marketplace/community-operators-vjglk" Nov 29 07:23:19 crc kubenswrapper[4947]: I1129 07:23:19.434242 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4810cfa-8049-4990-bbb4-765ea7f4c264-catalog-content\") pod \"community-operators-vjglk\" (UID: \"f4810cfa-8049-4990-bbb4-765ea7f4c264\") " pod="openshift-marketplace/community-operators-vjglk" Nov 29 07:23:19 crc kubenswrapper[4947]: I1129 07:23:19.461066 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mpvv\" (UniqueName: \"kubernetes.io/projected/f4810cfa-8049-4990-bbb4-765ea7f4c264-kube-api-access-6mpvv\") pod \"community-operators-vjglk\" (UID: \"f4810cfa-8049-4990-bbb4-765ea7f4c264\") " pod="openshift-marketplace/community-operators-vjglk" Nov 29 07:23:19 crc kubenswrapper[4947]: I1129 07:23:19.488688 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vjglk" Nov 29 07:23:20 crc kubenswrapper[4947]: I1129 07:23:20.083208 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vjglk"] Nov 29 07:23:20 crc kubenswrapper[4947]: W1129 07:23:20.085906 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4810cfa_8049_4990_bbb4_765ea7f4c264.slice/crio-ff259d2f2caa80769024d75bb6ee631a39b3dd33070698f058059b02d712bab3 WatchSource:0}: Error finding container ff259d2f2caa80769024d75bb6ee631a39b3dd33070698f058059b02d712bab3: Status 404 returned error can't find the container with id ff259d2f2caa80769024d75bb6ee631a39b3dd33070698f058059b02d712bab3 Nov 29 07:23:20 crc kubenswrapper[4947]: I1129 07:23:20.942672 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjglk" event={"ID":"f4810cfa-8049-4990-bbb4-765ea7f4c264","Type":"ContainerStarted","Data":"ff259d2f2caa80769024d75bb6ee631a39b3dd33070698f058059b02d712bab3"} Nov 29 07:23:23 crc kubenswrapper[4947]: I1129 07:23:23.974358 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4810cfa-8049-4990-bbb4-765ea7f4c264" containerID="af5f0eeefb86033e781f8f1c10f17543691664c8d1adbd8e0f91f2a2ded39a1f" exitCode=0 Nov 29 07:23:23 crc kubenswrapper[4947]: I1129 07:23:23.974455 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjglk" event={"ID":"f4810cfa-8049-4990-bbb4-765ea7f4c264","Type":"ContainerDied","Data":"af5f0eeefb86033e781f8f1c10f17543691664c8d1adbd8e0f91f2a2ded39a1f"} Nov 29 07:23:27 crc kubenswrapper[4947]: I1129 07:23:27.011454 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjglk" 
event={"ID":"f4810cfa-8049-4990-bbb4-765ea7f4c264","Type":"ContainerStarted","Data":"4241b6e0c73ea58fb05673211a8fc5233a0c5a67df41dffa3f54ee3198005cd0"} Nov 29 07:23:28 crc kubenswrapper[4947]: I1129 07:23:28.025280 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4810cfa-8049-4990-bbb4-765ea7f4c264" containerID="4241b6e0c73ea58fb05673211a8fc5233a0c5a67df41dffa3f54ee3198005cd0" exitCode=0 Nov 29 07:23:28 crc kubenswrapper[4947]: I1129 07:23:28.025420 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjglk" event={"ID":"f4810cfa-8049-4990-bbb4-765ea7f4c264","Type":"ContainerDied","Data":"4241b6e0c73ea58fb05673211a8fc5233a0c5a67df41dffa3f54ee3198005cd0"} Nov 29 07:23:33 crc kubenswrapper[4947]: I1129 07:23:33.082293 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjglk" event={"ID":"f4810cfa-8049-4990-bbb4-765ea7f4c264","Type":"ContainerStarted","Data":"8130dffb0692bca35bdd6fb433d947aa38bfcacb6447ee1bf87fd2ffb357a8bd"} Nov 29 07:23:33 crc kubenswrapper[4947]: I1129 07:23:33.106165 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vjglk" podStartSLOduration=7.093228572 podStartE2EDuration="14.106146549s" podCreationTimestamp="2025-11-29 07:23:19 +0000 UTC" firstStartedPulling="2025-11-29 07:23:23.977952998 +0000 UTC m=+2955.022335079" lastFinishedPulling="2025-11-29 07:23:30.990870975 +0000 UTC m=+2962.035253056" observedRunningTime="2025-11-29 07:23:33.102881877 +0000 UTC m=+2964.147263958" watchObservedRunningTime="2025-11-29 07:23:33.106146549 +0000 UTC m=+2964.150528620" Nov 29 07:23:39 crc kubenswrapper[4947]: I1129 07:23:39.490342 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vjglk" Nov 29 07:23:39 crc kubenswrapper[4947]: I1129 07:23:39.492129 4947 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-vjglk" Nov 29 07:23:39 crc kubenswrapper[4947]: I1129 07:23:39.540143 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vjglk" Nov 29 07:23:40 crc kubenswrapper[4947]: I1129 07:23:40.203715 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vjglk" Nov 29 07:23:40 crc kubenswrapper[4947]: I1129 07:23:40.268831 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vjglk"] Nov 29 07:23:42 crc kubenswrapper[4947]: I1129 07:23:42.166461 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vjglk" podUID="f4810cfa-8049-4990-bbb4-765ea7f4c264" containerName="registry-server" containerID="cri-o://8130dffb0692bca35bdd6fb433d947aa38bfcacb6447ee1bf87fd2ffb357a8bd" gracePeriod=2 Nov 29 07:23:42 crc kubenswrapper[4947]: I1129 07:23:42.694081 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vjglk" Nov 29 07:23:42 crc kubenswrapper[4947]: I1129 07:23:42.853503 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mpvv\" (UniqueName: \"kubernetes.io/projected/f4810cfa-8049-4990-bbb4-765ea7f4c264-kube-api-access-6mpvv\") pod \"f4810cfa-8049-4990-bbb4-765ea7f4c264\" (UID: \"f4810cfa-8049-4990-bbb4-765ea7f4c264\") " Nov 29 07:23:42 crc kubenswrapper[4947]: I1129 07:23:42.853908 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4810cfa-8049-4990-bbb4-765ea7f4c264-utilities\") pod \"f4810cfa-8049-4990-bbb4-765ea7f4c264\" (UID: \"f4810cfa-8049-4990-bbb4-765ea7f4c264\") " Nov 29 07:23:42 crc kubenswrapper[4947]: I1129 07:23:42.854080 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4810cfa-8049-4990-bbb4-765ea7f4c264-catalog-content\") pod \"f4810cfa-8049-4990-bbb4-765ea7f4c264\" (UID: \"f4810cfa-8049-4990-bbb4-765ea7f4c264\") " Nov 29 07:23:42 crc kubenswrapper[4947]: I1129 07:23:42.855182 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4810cfa-8049-4990-bbb4-765ea7f4c264-utilities" (OuterVolumeSpecName: "utilities") pod "f4810cfa-8049-4990-bbb4-765ea7f4c264" (UID: "f4810cfa-8049-4990-bbb4-765ea7f4c264"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:23:42 crc kubenswrapper[4947]: I1129 07:23:42.862643 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4810cfa-8049-4990-bbb4-765ea7f4c264-kube-api-access-6mpvv" (OuterVolumeSpecName: "kube-api-access-6mpvv") pod "f4810cfa-8049-4990-bbb4-765ea7f4c264" (UID: "f4810cfa-8049-4990-bbb4-765ea7f4c264"). InnerVolumeSpecName "kube-api-access-6mpvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:23:42 crc kubenswrapper[4947]: I1129 07:23:42.917075 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4810cfa-8049-4990-bbb4-765ea7f4c264-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4810cfa-8049-4990-bbb4-765ea7f4c264" (UID: "f4810cfa-8049-4990-bbb4-765ea7f4c264"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:23:42 crc kubenswrapper[4947]: I1129 07:23:42.957127 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4810cfa-8049-4990-bbb4-765ea7f4c264-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:42 crc kubenswrapper[4947]: I1129 07:23:42.957191 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mpvv\" (UniqueName: \"kubernetes.io/projected/f4810cfa-8049-4990-bbb4-765ea7f4c264-kube-api-access-6mpvv\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:42 crc kubenswrapper[4947]: I1129 07:23:42.957202 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4810cfa-8049-4990-bbb4-765ea7f4c264-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:43 crc kubenswrapper[4947]: I1129 07:23:43.181282 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vjglk" Nov 29 07:23:43 crc kubenswrapper[4947]: I1129 07:23:43.181166 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4810cfa-8049-4990-bbb4-765ea7f4c264" containerID="8130dffb0692bca35bdd6fb433d947aa38bfcacb6447ee1bf87fd2ffb357a8bd" exitCode=0 Nov 29 07:23:43 crc kubenswrapper[4947]: I1129 07:23:43.191419 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjglk" event={"ID":"f4810cfa-8049-4990-bbb4-765ea7f4c264","Type":"ContainerDied","Data":"8130dffb0692bca35bdd6fb433d947aa38bfcacb6447ee1bf87fd2ffb357a8bd"} Nov 29 07:23:43 crc kubenswrapper[4947]: I1129 07:23:43.191491 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjglk" event={"ID":"f4810cfa-8049-4990-bbb4-765ea7f4c264","Type":"ContainerDied","Data":"ff259d2f2caa80769024d75bb6ee631a39b3dd33070698f058059b02d712bab3"} Nov 29 07:23:43 crc kubenswrapper[4947]: I1129 07:23:43.191521 4947 scope.go:117] "RemoveContainer" containerID="8130dffb0692bca35bdd6fb433d947aa38bfcacb6447ee1bf87fd2ffb357a8bd" Nov 29 07:23:43 crc kubenswrapper[4947]: I1129 07:23:43.224579 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vjglk"] Nov 29 07:23:43 crc kubenswrapper[4947]: I1129 07:23:43.225455 4947 scope.go:117] "RemoveContainer" containerID="4241b6e0c73ea58fb05673211a8fc5233a0c5a67df41dffa3f54ee3198005cd0" Nov 29 07:23:43 crc kubenswrapper[4947]: I1129 07:23:43.236900 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vjglk"] Nov 29 07:23:43 crc kubenswrapper[4947]: I1129 07:23:43.255415 4947 scope.go:117] "RemoveContainer" containerID="af5f0eeefb86033e781f8f1c10f17543691664c8d1adbd8e0f91f2a2ded39a1f" Nov 29 07:23:43 crc kubenswrapper[4947]: I1129 07:23:43.307508 4947 scope.go:117] "RemoveContainer" 
containerID="8130dffb0692bca35bdd6fb433d947aa38bfcacb6447ee1bf87fd2ffb357a8bd" Nov 29 07:23:43 crc kubenswrapper[4947]: E1129 07:23:43.308877 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8130dffb0692bca35bdd6fb433d947aa38bfcacb6447ee1bf87fd2ffb357a8bd\": container with ID starting with 8130dffb0692bca35bdd6fb433d947aa38bfcacb6447ee1bf87fd2ffb357a8bd not found: ID does not exist" containerID="8130dffb0692bca35bdd6fb433d947aa38bfcacb6447ee1bf87fd2ffb357a8bd" Nov 29 07:23:43 crc kubenswrapper[4947]: I1129 07:23:43.308958 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8130dffb0692bca35bdd6fb433d947aa38bfcacb6447ee1bf87fd2ffb357a8bd"} err="failed to get container status \"8130dffb0692bca35bdd6fb433d947aa38bfcacb6447ee1bf87fd2ffb357a8bd\": rpc error: code = NotFound desc = could not find container \"8130dffb0692bca35bdd6fb433d947aa38bfcacb6447ee1bf87fd2ffb357a8bd\": container with ID starting with 8130dffb0692bca35bdd6fb433d947aa38bfcacb6447ee1bf87fd2ffb357a8bd not found: ID does not exist" Nov 29 07:23:43 crc kubenswrapper[4947]: I1129 07:23:43.309001 4947 scope.go:117] "RemoveContainer" containerID="4241b6e0c73ea58fb05673211a8fc5233a0c5a67df41dffa3f54ee3198005cd0" Nov 29 07:23:43 crc kubenswrapper[4947]: E1129 07:23:43.309848 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4241b6e0c73ea58fb05673211a8fc5233a0c5a67df41dffa3f54ee3198005cd0\": container with ID starting with 4241b6e0c73ea58fb05673211a8fc5233a0c5a67df41dffa3f54ee3198005cd0 not found: ID does not exist" containerID="4241b6e0c73ea58fb05673211a8fc5233a0c5a67df41dffa3f54ee3198005cd0" Nov 29 07:23:43 crc kubenswrapper[4947]: I1129 07:23:43.309943 4947 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4241b6e0c73ea58fb05673211a8fc5233a0c5a67df41dffa3f54ee3198005cd0"} err="failed to get container status \"4241b6e0c73ea58fb05673211a8fc5233a0c5a67df41dffa3f54ee3198005cd0\": rpc error: code = NotFound desc = could not find container \"4241b6e0c73ea58fb05673211a8fc5233a0c5a67df41dffa3f54ee3198005cd0\": container with ID starting with 4241b6e0c73ea58fb05673211a8fc5233a0c5a67df41dffa3f54ee3198005cd0 not found: ID does not exist" Nov 29 07:23:43 crc kubenswrapper[4947]: I1129 07:23:43.309993 4947 scope.go:117] "RemoveContainer" containerID="af5f0eeefb86033e781f8f1c10f17543691664c8d1adbd8e0f91f2a2ded39a1f" Nov 29 07:23:43 crc kubenswrapper[4947]: E1129 07:23:43.310877 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af5f0eeefb86033e781f8f1c10f17543691664c8d1adbd8e0f91f2a2ded39a1f\": container with ID starting with af5f0eeefb86033e781f8f1c10f17543691664c8d1adbd8e0f91f2a2ded39a1f not found: ID does not exist" containerID="af5f0eeefb86033e781f8f1c10f17543691664c8d1adbd8e0f91f2a2ded39a1f" Nov 29 07:23:43 crc kubenswrapper[4947]: I1129 07:23:43.310938 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af5f0eeefb86033e781f8f1c10f17543691664c8d1adbd8e0f91f2a2ded39a1f"} err="failed to get container status \"af5f0eeefb86033e781f8f1c10f17543691664c8d1adbd8e0f91f2a2ded39a1f\": rpc error: code = NotFound desc = could not find container \"af5f0eeefb86033e781f8f1c10f17543691664c8d1adbd8e0f91f2a2ded39a1f\": container with ID starting with af5f0eeefb86033e781f8f1c10f17543691664c8d1adbd8e0f91f2a2ded39a1f not found: ID does not exist" Nov 29 07:23:45 crc kubenswrapper[4947]: I1129 07:23:45.191558 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4810cfa-8049-4990-bbb4-765ea7f4c264" path="/var/lib/kubelet/pods/f4810cfa-8049-4990-bbb4-765ea7f4c264/volumes" Nov 29 07:24:18 crc kubenswrapper[4947]: I1129 
07:24:18.535267 4947 generic.go:334] "Generic (PLEG): container finished" podID="ac3f3931-63b8-4dea-a2b4-33faca7b3a93" containerID="4615b2a0a6e5161834555038736202836fb91b330ff62569fc239b72f2f18465" exitCode=0 Nov 29 07:24:18 crc kubenswrapper[4947]: I1129 07:24:18.535401 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" event={"ID":"ac3f3931-63b8-4dea-a2b4-33faca7b3a93","Type":"ContainerDied","Data":"4615b2a0a6e5161834555038736202836fb91b330ff62569fc239b72f2f18465"} Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.071683 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.139172 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ssh-key\") pod \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.140463 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ceph\") pod \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.140645 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-inventory\") pod \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.140754 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ovncontroller-config-0\") pod \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.140862 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pz4r\" (UniqueName: \"kubernetes.io/projected/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-kube-api-access-6pz4r\") pod \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.140904 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ovn-combined-ca-bundle\") pod \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\" (UID: \"ac3f3931-63b8-4dea-a2b4-33faca7b3a93\") " Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.148426 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ac3f3931-63b8-4dea-a2b4-33faca7b3a93" (UID: "ac3f3931-63b8-4dea-a2b4-33faca7b3a93"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.148536 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-kube-api-access-6pz4r" (OuterVolumeSpecName: "kube-api-access-6pz4r") pod "ac3f3931-63b8-4dea-a2b4-33faca7b3a93" (UID: "ac3f3931-63b8-4dea-a2b4-33faca7b3a93"). InnerVolumeSpecName "kube-api-access-6pz4r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.148670 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ceph" (OuterVolumeSpecName: "ceph") pod "ac3f3931-63b8-4dea-a2b4-33faca7b3a93" (UID: "ac3f3931-63b8-4dea-a2b4-33faca7b3a93"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.169966 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "ac3f3931-63b8-4dea-a2b4-33faca7b3a93" (UID: "ac3f3931-63b8-4dea-a2b4-33faca7b3a93"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.171766 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ac3f3931-63b8-4dea-a2b4-33faca7b3a93" (UID: "ac3f3931-63b8-4dea-a2b4-33faca7b3a93"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.185300 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-inventory" (OuterVolumeSpecName: "inventory") pod "ac3f3931-63b8-4dea-a2b4-33faca7b3a93" (UID: "ac3f3931-63b8-4dea-a2b4-33faca7b3a93"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.245331 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.245372 4947 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ceph\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.245387 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.245400 4947 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.245414 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pz4r\" (UniqueName: \"kubernetes.io/projected/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-kube-api-access-6pz4r\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.245424 4947 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac3f3931-63b8-4dea-a2b4-33faca7b3a93-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.558124 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" event={"ID":"ac3f3931-63b8-4dea-a2b4-33faca7b3a93","Type":"ContainerDied","Data":"bedd4a38e499e32fbbb5a2185ca10736d78806b82f39931db61487cbb09ac019"} Nov 29 07:24:20 crc 
kubenswrapper[4947]: I1129 07:24:20.558189 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bedd4a38e499e32fbbb5a2185ca10736d78806b82f39931db61487cbb09ac019" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.558555 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t5tbd" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.788072 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l"] Nov 29 07:24:20 crc kubenswrapper[4947]: E1129 07:24:20.789438 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3f3931-63b8-4dea-a2b4-33faca7b3a93" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.789466 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3f3931-63b8-4dea-a2b4-33faca7b3a93" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 29 07:24:20 crc kubenswrapper[4947]: E1129 07:24:20.789499 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4810cfa-8049-4990-bbb4-765ea7f4c264" containerName="extract-content" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.789508 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4810cfa-8049-4990-bbb4-765ea7f4c264" containerName="extract-content" Nov 29 07:24:20 crc kubenswrapper[4947]: E1129 07:24:20.789560 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4810cfa-8049-4990-bbb4-765ea7f4c264" containerName="registry-server" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.789569 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4810cfa-8049-4990-bbb4-765ea7f4c264" containerName="registry-server" Nov 29 07:24:20 crc kubenswrapper[4947]: E1129 07:24:20.789591 4947 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4810cfa-8049-4990-bbb4-765ea7f4c264" containerName="extract-utilities" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.789601 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4810cfa-8049-4990-bbb4-765ea7f4c264" containerName="extract-utilities" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.789986 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac3f3931-63b8-4dea-a2b4-33faca7b3a93" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.790010 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4810cfa-8049-4990-bbb4-765ea7f4c264" containerName="registry-server" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.791518 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.794918 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.795996 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.796350 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.796716 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.796954 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.801391 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:24:20 crc 
kubenswrapper[4947]: I1129 07:24:20.801508 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.811494 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l"] Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.861495 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.861586 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.861642 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.861719 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.862033 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.862149 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc2zz\" (UniqueName: \"kubernetes.io/projected/60c10307-3d24-4e37-b1a6-e165784f8f3c-kube-api-access-kc2zz\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.862277 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.965045 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-ceph\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.965810 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.966624 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.966716 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.966829 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l\" (UID: 
\"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.966919 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.966956 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc2zz\" (UniqueName: \"kubernetes.io/projected/60c10307-3d24-4e37-b1a6-e165784f8f3c-kube-api-access-kc2zz\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.973582 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.973582 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.973593 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.973686 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.973892 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.974201 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:20 crc kubenswrapper[4947]: I1129 07:24:20.990991 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc2zz\" (UniqueName: 
\"kubernetes.io/projected/60c10307-3d24-4e37-b1a6-e165784f8f3c-kube-api-access-kc2zz\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:21 crc kubenswrapper[4947]: I1129 07:24:21.128781 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:24:21 crc kubenswrapper[4947]: I1129 07:24:21.702678 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l"] Nov 29 07:24:22 crc kubenswrapper[4947]: I1129 07:24:22.580164 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" event={"ID":"60c10307-3d24-4e37-b1a6-e165784f8f3c","Type":"ContainerStarted","Data":"83894d9bc192c1d6d3f8348ab746b7bea0c7f939d2f6dffbcfb11a8fc925aa9a"} Nov 29 07:24:23 crc kubenswrapper[4947]: I1129 07:24:23.593277 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" event={"ID":"60c10307-3d24-4e37-b1a6-e165784f8f3c","Type":"ContainerStarted","Data":"ac5b051186ffea1f2e31ba41ae70a4f6526f1e37186f1d651af1e402ae257569"} Nov 29 07:24:23 crc kubenswrapper[4947]: I1129 07:24:23.622689 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" podStartSLOduration=2.232980474 podStartE2EDuration="3.62266149s" podCreationTimestamp="2025-11-29 07:24:20 +0000 UTC" firstStartedPulling="2025-11-29 07:24:21.709354944 +0000 UTC m=+3012.753737025" lastFinishedPulling="2025-11-29 07:24:23.09903595 +0000 UTC m=+3014.143418041" observedRunningTime="2025-11-29 07:24:23.61552746 +0000 UTC m=+3014.659909551" watchObservedRunningTime="2025-11-29 
07:24:23.62266149 +0000 UTC m=+3014.667043571" Nov 29 07:24:54 crc kubenswrapper[4947]: I1129 07:24:54.173299 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-grc75"] Nov 29 07:24:54 crc kubenswrapper[4947]: I1129 07:24:54.176420 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grc75" Nov 29 07:24:54 crc kubenswrapper[4947]: I1129 07:24:54.205091 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50eba4c0-5dd3-4965-888c-5dcd95b106f3-catalog-content\") pod \"redhat-marketplace-grc75\" (UID: \"50eba4c0-5dd3-4965-888c-5dcd95b106f3\") " pod="openshift-marketplace/redhat-marketplace-grc75" Nov 29 07:24:54 crc kubenswrapper[4947]: I1129 07:24:54.205261 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50eba4c0-5dd3-4965-888c-5dcd95b106f3-utilities\") pod \"redhat-marketplace-grc75\" (UID: \"50eba4c0-5dd3-4965-888c-5dcd95b106f3\") " pod="openshift-marketplace/redhat-marketplace-grc75" Nov 29 07:24:54 crc kubenswrapper[4947]: I1129 07:24:54.205332 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-557mc\" (UniqueName: \"kubernetes.io/projected/50eba4c0-5dd3-4965-888c-5dcd95b106f3-kube-api-access-557mc\") pod \"redhat-marketplace-grc75\" (UID: \"50eba4c0-5dd3-4965-888c-5dcd95b106f3\") " pod="openshift-marketplace/redhat-marketplace-grc75" Nov 29 07:24:54 crc kubenswrapper[4947]: I1129 07:24:54.208277 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-grc75"] Nov 29 07:24:54 crc kubenswrapper[4947]: I1129 07:24:54.308748 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/50eba4c0-5dd3-4965-888c-5dcd95b106f3-catalog-content\") pod \"redhat-marketplace-grc75\" (UID: \"50eba4c0-5dd3-4965-888c-5dcd95b106f3\") " pod="openshift-marketplace/redhat-marketplace-grc75" Nov 29 07:24:54 crc kubenswrapper[4947]: I1129 07:24:54.308839 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50eba4c0-5dd3-4965-888c-5dcd95b106f3-utilities\") pod \"redhat-marketplace-grc75\" (UID: \"50eba4c0-5dd3-4965-888c-5dcd95b106f3\") " pod="openshift-marketplace/redhat-marketplace-grc75" Nov 29 07:24:54 crc kubenswrapper[4947]: I1129 07:24:54.308901 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-557mc\" (UniqueName: \"kubernetes.io/projected/50eba4c0-5dd3-4965-888c-5dcd95b106f3-kube-api-access-557mc\") pod \"redhat-marketplace-grc75\" (UID: \"50eba4c0-5dd3-4965-888c-5dcd95b106f3\") " pod="openshift-marketplace/redhat-marketplace-grc75" Nov 29 07:24:54 crc kubenswrapper[4947]: I1129 07:24:54.310092 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50eba4c0-5dd3-4965-888c-5dcd95b106f3-utilities\") pod \"redhat-marketplace-grc75\" (UID: \"50eba4c0-5dd3-4965-888c-5dcd95b106f3\") " pod="openshift-marketplace/redhat-marketplace-grc75" Nov 29 07:24:54 crc kubenswrapper[4947]: I1129 07:24:54.310527 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50eba4c0-5dd3-4965-888c-5dcd95b106f3-catalog-content\") pod \"redhat-marketplace-grc75\" (UID: \"50eba4c0-5dd3-4965-888c-5dcd95b106f3\") " pod="openshift-marketplace/redhat-marketplace-grc75" Nov 29 07:24:54 crc kubenswrapper[4947]: I1129 07:24:54.331801 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-557mc\" (UniqueName: 
\"kubernetes.io/projected/50eba4c0-5dd3-4965-888c-5dcd95b106f3-kube-api-access-557mc\") pod \"redhat-marketplace-grc75\" (UID: \"50eba4c0-5dd3-4965-888c-5dcd95b106f3\") " pod="openshift-marketplace/redhat-marketplace-grc75" Nov 29 07:24:54 crc kubenswrapper[4947]: I1129 07:24:54.508107 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grc75" Nov 29 07:24:55 crc kubenswrapper[4947]: I1129 07:24:55.071843 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-grc75"] Nov 29 07:24:55 crc kubenswrapper[4947]: I1129 07:24:55.895147 4947 generic.go:334] "Generic (PLEG): container finished" podID="50eba4c0-5dd3-4965-888c-5dcd95b106f3" containerID="df6c32c5917f2a89aa79c0b28e6564ae329b5a986d621af24c5bd68930032294" exitCode=0 Nov 29 07:24:55 crc kubenswrapper[4947]: I1129 07:24:55.895237 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grc75" event={"ID":"50eba4c0-5dd3-4965-888c-5dcd95b106f3","Type":"ContainerDied","Data":"df6c32c5917f2a89aa79c0b28e6564ae329b5a986d621af24c5bd68930032294"} Nov 29 07:24:55 crc kubenswrapper[4947]: I1129 07:24:55.895810 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grc75" event={"ID":"50eba4c0-5dd3-4965-888c-5dcd95b106f3","Type":"ContainerStarted","Data":"c1931d550c75ce566debcb04f6320725b99df01771d090e33fe2c8a29b47bdaf"} Nov 29 07:24:57 crc kubenswrapper[4947]: I1129 07:24:57.927868 4947 generic.go:334] "Generic (PLEG): container finished" podID="50eba4c0-5dd3-4965-888c-5dcd95b106f3" containerID="e09bc8f3eb78624df7a91e5d3eb6362eaf54e14dbf1a1fc57225efc800cbbe48" exitCode=0 Nov 29 07:24:57 crc kubenswrapper[4947]: I1129 07:24:57.927982 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grc75" 
event={"ID":"50eba4c0-5dd3-4965-888c-5dcd95b106f3","Type":"ContainerDied","Data":"e09bc8f3eb78624df7a91e5d3eb6362eaf54e14dbf1a1fc57225efc800cbbe48"} Nov 29 07:24:58 crc kubenswrapper[4947]: I1129 07:24:58.943124 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grc75" event={"ID":"50eba4c0-5dd3-4965-888c-5dcd95b106f3","Type":"ContainerStarted","Data":"8115a45c46b02d2c09c65cc29605ff16f22e68934b3c019ddf7c41c0e710a7c8"} Nov 29 07:24:58 crc kubenswrapper[4947]: I1129 07:24:58.983882 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-grc75" podStartSLOduration=2.49763404 podStartE2EDuration="4.98385442s" podCreationTimestamp="2025-11-29 07:24:54 +0000 UTC" firstStartedPulling="2025-11-29 07:24:55.897197671 +0000 UTC m=+3046.941579752" lastFinishedPulling="2025-11-29 07:24:58.383418051 +0000 UTC m=+3049.427800132" observedRunningTime="2025-11-29 07:24:58.980490765 +0000 UTC m=+3050.024872856" watchObservedRunningTime="2025-11-29 07:24:58.98385442 +0000 UTC m=+3050.028236501" Nov 29 07:25:04 crc kubenswrapper[4947]: I1129 07:25:04.510052 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-grc75" Nov 29 07:25:04 crc kubenswrapper[4947]: I1129 07:25:04.510496 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-grc75" Nov 29 07:25:04 crc kubenswrapper[4947]: I1129 07:25:04.561202 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-grc75" Nov 29 07:25:05 crc kubenswrapper[4947]: I1129 07:25:05.060749 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-grc75" Nov 29 07:25:05 crc kubenswrapper[4947]: I1129 07:25:05.124466 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-grc75"] Nov 29 07:25:07 crc kubenswrapper[4947]: I1129 07:25:07.020463 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-grc75" podUID="50eba4c0-5dd3-4965-888c-5dcd95b106f3" containerName="registry-server" containerID="cri-o://8115a45c46b02d2c09c65cc29605ff16f22e68934b3c019ddf7c41c0e710a7c8" gracePeriod=2 Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.024418 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grc75" Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.035009 4947 generic.go:334] "Generic (PLEG): container finished" podID="50eba4c0-5dd3-4965-888c-5dcd95b106f3" containerID="8115a45c46b02d2c09c65cc29605ff16f22e68934b3c019ddf7c41c0e710a7c8" exitCode=0 Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.035082 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grc75" Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.035103 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grc75" event={"ID":"50eba4c0-5dd3-4965-888c-5dcd95b106f3","Type":"ContainerDied","Data":"8115a45c46b02d2c09c65cc29605ff16f22e68934b3c019ddf7c41c0e710a7c8"} Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.036196 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grc75" event={"ID":"50eba4c0-5dd3-4965-888c-5dcd95b106f3","Type":"ContainerDied","Data":"c1931d550c75ce566debcb04f6320725b99df01771d090e33fe2c8a29b47bdaf"} Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.036280 4947 scope.go:117] "RemoveContainer" containerID="8115a45c46b02d2c09c65cc29605ff16f22e68934b3c019ddf7c41c0e710a7c8" Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.085037 4947 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50eba4c0-5dd3-4965-888c-5dcd95b106f3-utilities\") pod \"50eba4c0-5dd3-4965-888c-5dcd95b106f3\" (UID: \"50eba4c0-5dd3-4965-888c-5dcd95b106f3\") " Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.085461 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50eba4c0-5dd3-4965-888c-5dcd95b106f3-catalog-content\") pod \"50eba4c0-5dd3-4965-888c-5dcd95b106f3\" (UID: \"50eba4c0-5dd3-4965-888c-5dcd95b106f3\") " Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.085648 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-557mc\" (UniqueName: \"kubernetes.io/projected/50eba4c0-5dd3-4965-888c-5dcd95b106f3-kube-api-access-557mc\") pod \"50eba4c0-5dd3-4965-888c-5dcd95b106f3\" (UID: \"50eba4c0-5dd3-4965-888c-5dcd95b106f3\") " Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.088176 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50eba4c0-5dd3-4965-888c-5dcd95b106f3-utilities" (OuterVolumeSpecName: "utilities") pod "50eba4c0-5dd3-4965-888c-5dcd95b106f3" (UID: "50eba4c0-5dd3-4965-888c-5dcd95b106f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.094666 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50eba4c0-5dd3-4965-888c-5dcd95b106f3-kube-api-access-557mc" (OuterVolumeSpecName: "kube-api-access-557mc") pod "50eba4c0-5dd3-4965-888c-5dcd95b106f3" (UID: "50eba4c0-5dd3-4965-888c-5dcd95b106f3"). InnerVolumeSpecName "kube-api-access-557mc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.096289 4947 scope.go:117] "RemoveContainer" containerID="e09bc8f3eb78624df7a91e5d3eb6362eaf54e14dbf1a1fc57225efc800cbbe48" Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.117698 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50eba4c0-5dd3-4965-888c-5dcd95b106f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50eba4c0-5dd3-4965-888c-5dcd95b106f3" (UID: "50eba4c0-5dd3-4965-888c-5dcd95b106f3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.156880 4947 scope.go:117] "RemoveContainer" containerID="df6c32c5917f2a89aa79c0b28e6564ae329b5a986d621af24c5bd68930032294" Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.191931 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-557mc\" (UniqueName: \"kubernetes.io/projected/50eba4c0-5dd3-4965-888c-5dcd95b106f3-kube-api-access-557mc\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.191979 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50eba4c0-5dd3-4965-888c-5dcd95b106f3-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.191990 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50eba4c0-5dd3-4965-888c-5dcd95b106f3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.230339 4947 scope.go:117] "RemoveContainer" containerID="8115a45c46b02d2c09c65cc29605ff16f22e68934b3c019ddf7c41c0e710a7c8" Nov 29 07:25:08 crc kubenswrapper[4947]: E1129 07:25:08.230851 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"8115a45c46b02d2c09c65cc29605ff16f22e68934b3c019ddf7c41c0e710a7c8\": container with ID starting with 8115a45c46b02d2c09c65cc29605ff16f22e68934b3c019ddf7c41c0e710a7c8 not found: ID does not exist" containerID="8115a45c46b02d2c09c65cc29605ff16f22e68934b3c019ddf7c41c0e710a7c8" Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.230890 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8115a45c46b02d2c09c65cc29605ff16f22e68934b3c019ddf7c41c0e710a7c8"} err="failed to get container status \"8115a45c46b02d2c09c65cc29605ff16f22e68934b3c019ddf7c41c0e710a7c8\": rpc error: code = NotFound desc = could not find container \"8115a45c46b02d2c09c65cc29605ff16f22e68934b3c019ddf7c41c0e710a7c8\": container with ID starting with 8115a45c46b02d2c09c65cc29605ff16f22e68934b3c019ddf7c41c0e710a7c8 not found: ID does not exist" Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.230919 4947 scope.go:117] "RemoveContainer" containerID="e09bc8f3eb78624df7a91e5d3eb6362eaf54e14dbf1a1fc57225efc800cbbe48" Nov 29 07:25:08 crc kubenswrapper[4947]: E1129 07:25:08.231467 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e09bc8f3eb78624df7a91e5d3eb6362eaf54e14dbf1a1fc57225efc800cbbe48\": container with ID starting with e09bc8f3eb78624df7a91e5d3eb6362eaf54e14dbf1a1fc57225efc800cbbe48 not found: ID does not exist" containerID="e09bc8f3eb78624df7a91e5d3eb6362eaf54e14dbf1a1fc57225efc800cbbe48" Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.231506 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e09bc8f3eb78624df7a91e5d3eb6362eaf54e14dbf1a1fc57225efc800cbbe48"} err="failed to get container status \"e09bc8f3eb78624df7a91e5d3eb6362eaf54e14dbf1a1fc57225efc800cbbe48\": rpc error: code = NotFound desc = could not find container 
\"e09bc8f3eb78624df7a91e5d3eb6362eaf54e14dbf1a1fc57225efc800cbbe48\": container with ID starting with e09bc8f3eb78624df7a91e5d3eb6362eaf54e14dbf1a1fc57225efc800cbbe48 not found: ID does not exist" Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.231527 4947 scope.go:117] "RemoveContainer" containerID="df6c32c5917f2a89aa79c0b28e6564ae329b5a986d621af24c5bd68930032294" Nov 29 07:25:08 crc kubenswrapper[4947]: E1129 07:25:08.231834 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df6c32c5917f2a89aa79c0b28e6564ae329b5a986d621af24c5bd68930032294\": container with ID starting with df6c32c5917f2a89aa79c0b28e6564ae329b5a986d621af24c5bd68930032294 not found: ID does not exist" containerID="df6c32c5917f2a89aa79c0b28e6564ae329b5a986d621af24c5bd68930032294" Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.231866 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df6c32c5917f2a89aa79c0b28e6564ae329b5a986d621af24c5bd68930032294"} err="failed to get container status \"df6c32c5917f2a89aa79c0b28e6564ae329b5a986d621af24c5bd68930032294\": rpc error: code = NotFound desc = could not find container \"df6c32c5917f2a89aa79c0b28e6564ae329b5a986d621af24c5bd68930032294\": container with ID starting with df6c32c5917f2a89aa79c0b28e6564ae329b5a986d621af24c5bd68930032294 not found: ID does not exist" Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.381050 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-grc75"] Nov 29 07:25:08 crc kubenswrapper[4947]: I1129 07:25:08.392781 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-grc75"] Nov 29 07:25:09 crc kubenswrapper[4947]: I1129 07:25:09.195739 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50eba4c0-5dd3-4965-888c-5dcd95b106f3" 
path="/var/lib/kubelet/pods/50eba4c0-5dd3-4965-888c-5dcd95b106f3/volumes" Nov 29 07:25:22 crc kubenswrapper[4947]: I1129 07:25:22.987822 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:25:22 crc kubenswrapper[4947]: I1129 07:25:22.988469 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:25:52 crc kubenswrapper[4947]: I1129 07:25:52.988340 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:25:52 crc kubenswrapper[4947]: I1129 07:25:52.988937 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:25:59 crc kubenswrapper[4947]: I1129 07:25:59.551956 4947 generic.go:334] "Generic (PLEG): container finished" podID="60c10307-3d24-4e37-b1a6-e165784f8f3c" containerID="ac5b051186ffea1f2e31ba41ae70a4f6526f1e37186f1d651af1e402ae257569" exitCode=0 Nov 29 07:25:59 crc kubenswrapper[4947]: I1129 07:25:59.552250 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" event={"ID":"60c10307-3d24-4e37-b1a6-e165784f8f3c","Type":"ContainerDied","Data":"ac5b051186ffea1f2e31ba41ae70a4f6526f1e37186f1d651af1e402ae257569"} Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.022615 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.101712 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-nova-metadata-neutron-config-0\") pod \"60c10307-3d24-4e37-b1a6-e165784f8f3c\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.101773 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-neutron-metadata-combined-ca-bundle\") pod \"60c10307-3d24-4e37-b1a6-e165784f8f3c\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.102074 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"60c10307-3d24-4e37-b1a6-e165784f8f3c\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.102181 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-ceph\") pod \"60c10307-3d24-4e37-b1a6-e165784f8f3c\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " Nov 29 07:26:01 crc 
kubenswrapper[4947]: I1129 07:26:01.102242 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-ssh-key\") pod \"60c10307-3d24-4e37-b1a6-e165784f8f3c\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.102359 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-inventory\") pod \"60c10307-3d24-4e37-b1a6-e165784f8f3c\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.103114 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc2zz\" (UniqueName: \"kubernetes.io/projected/60c10307-3d24-4e37-b1a6-e165784f8f3c-kube-api-access-kc2zz\") pod \"60c10307-3d24-4e37-b1a6-e165784f8f3c\" (UID: \"60c10307-3d24-4e37-b1a6-e165784f8f3c\") " Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.109243 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "60c10307-3d24-4e37-b1a6-e165784f8f3c" (UID: "60c10307-3d24-4e37-b1a6-e165784f8f3c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.109287 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-ceph" (OuterVolumeSpecName: "ceph") pod "60c10307-3d24-4e37-b1a6-e165784f8f3c" (UID: "60c10307-3d24-4e37-b1a6-e165784f8f3c"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.110103 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60c10307-3d24-4e37-b1a6-e165784f8f3c-kube-api-access-kc2zz" (OuterVolumeSpecName: "kube-api-access-kc2zz") pod "60c10307-3d24-4e37-b1a6-e165784f8f3c" (UID: "60c10307-3d24-4e37-b1a6-e165784f8f3c"). InnerVolumeSpecName "kube-api-access-kc2zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.136559 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "60c10307-3d24-4e37-b1a6-e165784f8f3c" (UID: "60c10307-3d24-4e37-b1a6-e165784f8f3c"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.137866 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "60c10307-3d24-4e37-b1a6-e165784f8f3c" (UID: "60c10307-3d24-4e37-b1a6-e165784f8f3c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.144610 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "60c10307-3d24-4e37-b1a6-e165784f8f3c" (UID: "60c10307-3d24-4e37-b1a6-e165784f8f3c"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.145534 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-inventory" (OuterVolumeSpecName: "inventory") pod "60c10307-3d24-4e37-b1a6-e165784f8f3c" (UID: "60c10307-3d24-4e37-b1a6-e165784f8f3c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.206366 4947 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.206733 4947 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.206842 4947 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.206958 4947 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-ceph\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.207048 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.207138 4947 reconciler_common.go:293] "Volume 
detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60c10307-3d24-4e37-b1a6-e165784f8f3c-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.207238 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc2zz\" (UniqueName: \"kubernetes.io/projected/60c10307-3d24-4e37-b1a6-e165784f8f3c-kube-api-access-kc2zz\") on node \"crc\" DevicePath \"\"" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.575247 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" event={"ID":"60c10307-3d24-4e37-b1a6-e165784f8f3c","Type":"ContainerDied","Data":"83894d9bc192c1d6d3f8348ab746b7bea0c7f939d2f6dffbcfb11a8fc925aa9a"} Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.575298 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83894d9bc192c1d6d3f8348ab746b7bea0c7f939d2f6dffbcfb11a8fc925aa9a" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.575324 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.693292 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8"] Nov 29 07:26:01 crc kubenswrapper[4947]: E1129 07:26:01.694012 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c10307-3d24-4e37-b1a6-e165784f8f3c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.694040 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c10307-3d24-4e37-b1a6-e165784f8f3c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 29 07:26:01 crc kubenswrapper[4947]: E1129 07:26:01.694068 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50eba4c0-5dd3-4965-888c-5dcd95b106f3" containerName="extract-utilities" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.694077 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="50eba4c0-5dd3-4965-888c-5dcd95b106f3" containerName="extract-utilities" Nov 29 07:26:01 crc kubenswrapper[4947]: E1129 07:26:01.694104 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50eba4c0-5dd3-4965-888c-5dcd95b106f3" containerName="registry-server" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.694113 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="50eba4c0-5dd3-4965-888c-5dcd95b106f3" containerName="registry-server" Nov 29 07:26:01 crc kubenswrapper[4947]: E1129 07:26:01.694127 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50eba4c0-5dd3-4965-888c-5dcd95b106f3" containerName="extract-content" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.694134 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="50eba4c0-5dd3-4965-888c-5dcd95b106f3" containerName="extract-content" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 
07:26:01.694444 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="60c10307-3d24-4e37-b1a6-e165784f8f3c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.694479 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="50eba4c0-5dd3-4965-888c-5dcd95b106f3" containerName="registry-server" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.695278 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.698623 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.699904 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.704164 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.704477 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.704594 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.704651 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.706870 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8"] Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.819077 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.819495 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xglxk\" (UniqueName: \"kubernetes.io/projected/97ae78bf-a258-4fbc-912c-f7d6ce706e54-kube-api-access-xglxk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.819658 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.819780 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.819986 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8\" (UID: 
\"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.820134 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.922435 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.922504 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xglxk\" (UniqueName: \"kubernetes.io/projected/97ae78bf-a258-4fbc-912c-f7d6ce706e54-kube-api-access-xglxk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.922547 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.922568 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.922598 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.922640 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.927120 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.927177 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" Nov 29 07:26:01 crc 
kubenswrapper[4947]: I1129 07:26:01.927227 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.927644 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.934591 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" Nov 29 07:26:01 crc kubenswrapper[4947]: I1129 07:26:01.942903 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xglxk\" (UniqueName: \"kubernetes.io/projected/97ae78bf-a258-4fbc-912c-f7d6ce706e54-kube-api-access-xglxk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" Nov 29 07:26:02 crc kubenswrapper[4947]: I1129 07:26:02.023242 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" Nov 29 07:26:02 crc kubenswrapper[4947]: I1129 07:26:02.577143 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8"] Nov 29 07:26:02 crc kubenswrapper[4947]: I1129 07:26:02.583147 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:26:03 crc kubenswrapper[4947]: I1129 07:26:03.597365 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" event={"ID":"97ae78bf-a258-4fbc-912c-f7d6ce706e54","Type":"ContainerStarted","Data":"2c616502c06489d694ccb03715648d275d2eb8099e1f42a8fa3ce06fa60e1a00"} Nov 29 07:26:04 crc kubenswrapper[4947]: I1129 07:26:04.607934 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" event={"ID":"97ae78bf-a258-4fbc-912c-f7d6ce706e54","Type":"ContainerStarted","Data":"4a2a4576603769b101a0cc56d7e6dfc0b5b3949a5e2c9dc63c7e01ad109186fe"} Nov 29 07:26:04 crc kubenswrapper[4947]: I1129 07:26:04.634310 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" podStartSLOduration=2.823738323 podStartE2EDuration="3.634271127s" podCreationTimestamp="2025-11-29 07:26:01 +0000 UTC" firstStartedPulling="2025-11-29 07:26:02.582929155 +0000 UTC m=+3113.627311236" lastFinishedPulling="2025-11-29 07:26:03.393461959 +0000 UTC m=+3114.437844040" observedRunningTime="2025-11-29 07:26:04.623914015 +0000 UTC m=+3115.668296146" watchObservedRunningTime="2025-11-29 07:26:04.634271127 +0000 UTC m=+3115.678653208" Nov 29 07:26:22 crc kubenswrapper[4947]: I1129 07:26:22.987260 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:26:22 crc kubenswrapper[4947]: I1129 07:26:22.987858 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:26:22 crc kubenswrapper[4947]: I1129 07:26:22.987912 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 07:26:22 crc kubenswrapper[4947]: I1129 07:26:22.988721 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9"} pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:26:22 crc kubenswrapper[4947]: I1129 07:26:22.988781 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" containerID="cri-o://9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" gracePeriod=600 Nov 29 07:26:23 crc kubenswrapper[4947]: I1129 07:26:23.797601 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" exitCode=0 Nov 29 07:26:23 crc kubenswrapper[4947]: I1129 07:26:23.797707 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" 
event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerDied","Data":"9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9"} Nov 29 07:26:23 crc kubenswrapper[4947]: I1129 07:26:23.797831 4947 scope.go:117] "RemoveContainer" containerID="c579e62fbc7ec20ab5411cd9ba7d8f85ddfcbbe286f5b6a301b4c37126dd7a87" Nov 29 07:26:23 crc kubenswrapper[4947]: E1129 07:26:23.876488 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:26:24 crc kubenswrapper[4947]: I1129 07:26:24.809829 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:26:24 crc kubenswrapper[4947]: E1129 07:26:24.810616 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:26:36 crc kubenswrapper[4947]: I1129 07:26:36.178959 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:26:36 crc kubenswrapper[4947]: E1129 07:26:36.179808 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:26:48 crc kubenswrapper[4947]: I1129 07:26:48.179321 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:26:48 crc kubenswrapper[4947]: E1129 07:26:48.180234 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:27:03 crc kubenswrapper[4947]: I1129 07:27:03.179794 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:27:03 crc kubenswrapper[4947]: E1129 07:27:03.180979 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:27:18 crc kubenswrapper[4947]: I1129 07:27:18.179894 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:27:18 crc kubenswrapper[4947]: E1129 07:27:18.181192 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:27:29 crc kubenswrapper[4947]: I1129 07:27:29.190525 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:27:29 crc kubenswrapper[4947]: E1129 07:27:29.192241 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:27:43 crc kubenswrapper[4947]: I1129 07:27:43.179688 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:27:43 crc kubenswrapper[4947]: E1129 07:27:43.184066 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:27:55 crc kubenswrapper[4947]: I1129 07:27:55.179949 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:27:55 crc kubenswrapper[4947]: E1129 07:27:55.181323 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:28:08 crc kubenswrapper[4947]: I1129 07:28:08.178698 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:28:08 crc kubenswrapper[4947]: E1129 07:28:08.198983 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:28:16 crc kubenswrapper[4947]: I1129 07:28:16.861139 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k895b"] Nov 29 07:28:16 crc kubenswrapper[4947]: I1129 07:28:16.863708 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k895b" Nov 29 07:28:16 crc kubenswrapper[4947]: I1129 07:28:16.895920 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k895b"] Nov 29 07:28:16 crc kubenswrapper[4947]: I1129 07:28:16.992837 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36d79fe8-06fb-4d21-8b9d-c0065e02274e-utilities\") pod \"certified-operators-k895b\" (UID: \"36d79fe8-06fb-4d21-8b9d-c0065e02274e\") " pod="openshift-marketplace/certified-operators-k895b" Nov 29 07:28:16 crc kubenswrapper[4947]: I1129 07:28:16.993030 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36d79fe8-06fb-4d21-8b9d-c0065e02274e-catalog-content\") pod \"certified-operators-k895b\" (UID: \"36d79fe8-06fb-4d21-8b9d-c0065e02274e\") " pod="openshift-marketplace/certified-operators-k895b" Nov 29 07:28:16 crc kubenswrapper[4947]: I1129 07:28:16.993088 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csk6p\" (UniqueName: \"kubernetes.io/projected/36d79fe8-06fb-4d21-8b9d-c0065e02274e-kube-api-access-csk6p\") pod \"certified-operators-k895b\" (UID: \"36d79fe8-06fb-4d21-8b9d-c0065e02274e\") " pod="openshift-marketplace/certified-operators-k895b" Nov 29 07:28:17 crc kubenswrapper[4947]: I1129 07:28:17.095356 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csk6p\" (UniqueName: \"kubernetes.io/projected/36d79fe8-06fb-4d21-8b9d-c0065e02274e-kube-api-access-csk6p\") pod \"certified-operators-k895b\" (UID: \"36d79fe8-06fb-4d21-8b9d-c0065e02274e\") " pod="openshift-marketplace/certified-operators-k895b" Nov 29 07:28:17 crc kubenswrapper[4947]: I1129 07:28:17.096025 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36d79fe8-06fb-4d21-8b9d-c0065e02274e-utilities\") pod \"certified-operators-k895b\" (UID: \"36d79fe8-06fb-4d21-8b9d-c0065e02274e\") " pod="openshift-marketplace/certified-operators-k895b" Nov 29 07:28:17 crc kubenswrapper[4947]: I1129 07:28:17.096315 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36d79fe8-06fb-4d21-8b9d-c0065e02274e-catalog-content\") pod \"certified-operators-k895b\" (UID: \"36d79fe8-06fb-4d21-8b9d-c0065e02274e\") " pod="openshift-marketplace/certified-operators-k895b" Nov 29 07:28:17 crc kubenswrapper[4947]: I1129 07:28:17.096924 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36d79fe8-06fb-4d21-8b9d-c0065e02274e-utilities\") pod \"certified-operators-k895b\" (UID: \"36d79fe8-06fb-4d21-8b9d-c0065e02274e\") " pod="openshift-marketplace/certified-operators-k895b" Nov 29 07:28:17 crc kubenswrapper[4947]: I1129 07:28:17.096994 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36d79fe8-06fb-4d21-8b9d-c0065e02274e-catalog-content\") pod \"certified-operators-k895b\" (UID: \"36d79fe8-06fb-4d21-8b9d-c0065e02274e\") " pod="openshift-marketplace/certified-operators-k895b" Nov 29 07:28:17 crc kubenswrapper[4947]: I1129 07:28:17.119438 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csk6p\" (UniqueName: \"kubernetes.io/projected/36d79fe8-06fb-4d21-8b9d-c0065e02274e-kube-api-access-csk6p\") pod \"certified-operators-k895b\" (UID: \"36d79fe8-06fb-4d21-8b9d-c0065e02274e\") " pod="openshift-marketplace/certified-operators-k895b" Nov 29 07:28:17 crc kubenswrapper[4947]: I1129 07:28:17.202492 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k895b" Nov 29 07:28:17 crc kubenswrapper[4947]: I1129 07:28:17.843791 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k895b"] Nov 29 07:28:17 crc kubenswrapper[4947]: I1129 07:28:17.990594 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k895b" event={"ID":"36d79fe8-06fb-4d21-8b9d-c0065e02274e","Type":"ContainerStarted","Data":"000ec64ac947d6e0113a3194e68654bf0c021556e1df25895a86673682a4b567"} Nov 29 07:28:19 crc kubenswrapper[4947]: I1129 07:28:19.002341 4947 generic.go:334] "Generic (PLEG): container finished" podID="36d79fe8-06fb-4d21-8b9d-c0065e02274e" containerID="fbebfb8ed18b47ce584d64ea07153231d1fcd4dda72133c9d0ee16bbf6d9c4ae" exitCode=0 Nov 29 07:28:19 crc kubenswrapper[4947]: I1129 07:28:19.002433 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k895b" event={"ID":"36d79fe8-06fb-4d21-8b9d-c0065e02274e","Type":"ContainerDied","Data":"fbebfb8ed18b47ce584d64ea07153231d1fcd4dda72133c9d0ee16bbf6d9c4ae"} Nov 29 07:28:21 crc kubenswrapper[4947]: I1129 07:28:21.038612 4947 generic.go:334] "Generic (PLEG): container finished" podID="36d79fe8-06fb-4d21-8b9d-c0065e02274e" containerID="3824acabff7f1777467f9228656c180b37e820e00be51c237b65420a33f213fb" exitCode=0 Nov 29 07:28:21 crc kubenswrapper[4947]: I1129 07:28:21.039082 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k895b" event={"ID":"36d79fe8-06fb-4d21-8b9d-c0065e02274e","Type":"ContainerDied","Data":"3824acabff7f1777467f9228656c180b37e820e00be51c237b65420a33f213fb"} Nov 29 07:28:21 crc kubenswrapper[4947]: I1129 07:28:21.179442 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:28:21 crc kubenswrapper[4947]: E1129 07:28:21.179799 4947 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:28:21 crc kubenswrapper[4947]: I1129 07:28:21.246885 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jxs6d"] Nov 29 07:28:21 crc kubenswrapper[4947]: I1129 07:28:21.249515 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jxs6d" Nov 29 07:28:21 crc kubenswrapper[4947]: I1129 07:28:21.257581 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jxs6d"] Nov 29 07:28:21 crc kubenswrapper[4947]: I1129 07:28:21.385427 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2bm9\" (UniqueName: \"kubernetes.io/projected/7d5a7203-76b9-49e6-96c3-6bab103ff1a6-kube-api-access-z2bm9\") pod \"redhat-operators-jxs6d\" (UID: \"7d5a7203-76b9-49e6-96c3-6bab103ff1a6\") " pod="openshift-marketplace/redhat-operators-jxs6d" Nov 29 07:28:21 crc kubenswrapper[4947]: I1129 07:28:21.385766 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5a7203-76b9-49e6-96c3-6bab103ff1a6-utilities\") pod \"redhat-operators-jxs6d\" (UID: \"7d5a7203-76b9-49e6-96c3-6bab103ff1a6\") " pod="openshift-marketplace/redhat-operators-jxs6d" Nov 29 07:28:21 crc kubenswrapper[4947]: I1129 07:28:21.386115 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7d5a7203-76b9-49e6-96c3-6bab103ff1a6-catalog-content\") pod \"redhat-operators-jxs6d\" (UID: \"7d5a7203-76b9-49e6-96c3-6bab103ff1a6\") " pod="openshift-marketplace/redhat-operators-jxs6d" Nov 29 07:28:21 crc kubenswrapper[4947]: I1129 07:28:21.488808 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2bm9\" (UniqueName: \"kubernetes.io/projected/7d5a7203-76b9-49e6-96c3-6bab103ff1a6-kube-api-access-z2bm9\") pod \"redhat-operators-jxs6d\" (UID: \"7d5a7203-76b9-49e6-96c3-6bab103ff1a6\") " pod="openshift-marketplace/redhat-operators-jxs6d" Nov 29 07:28:21 crc kubenswrapper[4947]: I1129 07:28:21.488967 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5a7203-76b9-49e6-96c3-6bab103ff1a6-utilities\") pod \"redhat-operators-jxs6d\" (UID: \"7d5a7203-76b9-49e6-96c3-6bab103ff1a6\") " pod="openshift-marketplace/redhat-operators-jxs6d" Nov 29 07:28:21 crc kubenswrapper[4947]: I1129 07:28:21.489071 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5a7203-76b9-49e6-96c3-6bab103ff1a6-catalog-content\") pod \"redhat-operators-jxs6d\" (UID: \"7d5a7203-76b9-49e6-96c3-6bab103ff1a6\") " pod="openshift-marketplace/redhat-operators-jxs6d" Nov 29 07:28:21 crc kubenswrapper[4947]: I1129 07:28:21.489722 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5a7203-76b9-49e6-96c3-6bab103ff1a6-catalog-content\") pod \"redhat-operators-jxs6d\" (UID: \"7d5a7203-76b9-49e6-96c3-6bab103ff1a6\") " pod="openshift-marketplace/redhat-operators-jxs6d" Nov 29 07:28:21 crc kubenswrapper[4947]: I1129 07:28:21.489941 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7d5a7203-76b9-49e6-96c3-6bab103ff1a6-utilities\") pod \"redhat-operators-jxs6d\" (UID: \"7d5a7203-76b9-49e6-96c3-6bab103ff1a6\") " pod="openshift-marketplace/redhat-operators-jxs6d" Nov 29 07:28:21 crc kubenswrapper[4947]: I1129 07:28:21.521738 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2bm9\" (UniqueName: \"kubernetes.io/projected/7d5a7203-76b9-49e6-96c3-6bab103ff1a6-kube-api-access-z2bm9\") pod \"redhat-operators-jxs6d\" (UID: \"7d5a7203-76b9-49e6-96c3-6bab103ff1a6\") " pod="openshift-marketplace/redhat-operators-jxs6d" Nov 29 07:28:21 crc kubenswrapper[4947]: I1129 07:28:21.591908 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jxs6d" Nov 29 07:28:22 crc kubenswrapper[4947]: I1129 07:28:22.058497 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k895b" event={"ID":"36d79fe8-06fb-4d21-8b9d-c0065e02274e","Type":"ContainerStarted","Data":"2bb85e69ae62cb2b4c0c5e5cf628871908238c8973275f8ec68937503de366ba"} Nov 29 07:28:22 crc kubenswrapper[4947]: I1129 07:28:22.097178 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k895b" podStartSLOduration=3.390018302 podStartE2EDuration="6.097159229s" podCreationTimestamp="2025-11-29 07:28:16 +0000 UTC" firstStartedPulling="2025-11-29 07:28:19.005508064 +0000 UTC m=+3250.049890145" lastFinishedPulling="2025-11-29 07:28:21.712648981 +0000 UTC m=+3252.757031072" observedRunningTime="2025-11-29 07:28:22.094244406 +0000 UTC m=+3253.138626487" watchObservedRunningTime="2025-11-29 07:28:22.097159229 +0000 UTC m=+3253.141541310" Nov 29 07:28:22 crc kubenswrapper[4947]: I1129 07:28:22.167374 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jxs6d"] Nov 29 07:28:23 crc kubenswrapper[4947]: I1129 07:28:23.070104 4947 
generic.go:334] "Generic (PLEG): container finished" podID="7d5a7203-76b9-49e6-96c3-6bab103ff1a6" containerID="f2ba2225a294d31e8b0b5fe02c1c63c4dd96b8734cc973227fa1630b4a0451d4" exitCode=0 Nov 29 07:28:23 crc kubenswrapper[4947]: I1129 07:28:23.070205 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxs6d" event={"ID":"7d5a7203-76b9-49e6-96c3-6bab103ff1a6","Type":"ContainerDied","Data":"f2ba2225a294d31e8b0b5fe02c1c63c4dd96b8734cc973227fa1630b4a0451d4"} Nov 29 07:28:23 crc kubenswrapper[4947]: I1129 07:28:23.070639 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxs6d" event={"ID":"7d5a7203-76b9-49e6-96c3-6bab103ff1a6","Type":"ContainerStarted","Data":"d0a6b11f39e42e0883555b02b2339f91c675f9493c260386e70b53d9f079d170"} Nov 29 07:28:26 crc kubenswrapper[4947]: I1129 07:28:26.104792 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxs6d" event={"ID":"7d5a7203-76b9-49e6-96c3-6bab103ff1a6","Type":"ContainerStarted","Data":"b7670d2b9df42ec8a7a7ddb17f4438acd80b9045ffc0bbf843e3ea28b2dde653"} Nov 29 07:28:27 crc kubenswrapper[4947]: I1129 07:28:27.203894 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k895b" Nov 29 07:28:27 crc kubenswrapper[4947]: I1129 07:28:27.204312 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k895b" Nov 29 07:28:27 crc kubenswrapper[4947]: I1129 07:28:27.275446 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k895b" Nov 29 07:28:28 crc kubenswrapper[4947]: I1129 07:28:28.176502 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k895b" Nov 29 07:28:28 crc kubenswrapper[4947]: I1129 07:28:28.427949 4947 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k895b"] Nov 29 07:28:29 crc kubenswrapper[4947]: I1129 07:28:29.140470 4947 generic.go:334] "Generic (PLEG): container finished" podID="7d5a7203-76b9-49e6-96c3-6bab103ff1a6" containerID="b7670d2b9df42ec8a7a7ddb17f4438acd80b9045ffc0bbf843e3ea28b2dde653" exitCode=0 Nov 29 07:28:29 crc kubenswrapper[4947]: I1129 07:28:29.140549 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxs6d" event={"ID":"7d5a7203-76b9-49e6-96c3-6bab103ff1a6","Type":"ContainerDied","Data":"b7670d2b9df42ec8a7a7ddb17f4438acd80b9045ffc0bbf843e3ea28b2dde653"} Nov 29 07:28:30 crc kubenswrapper[4947]: I1129 07:28:30.149259 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k895b" podUID="36d79fe8-06fb-4d21-8b9d-c0065e02274e" containerName="registry-server" containerID="cri-o://2bb85e69ae62cb2b4c0c5e5cf628871908238c8973275f8ec68937503de366ba" gracePeriod=2 Nov 29 07:28:31 crc kubenswrapper[4947]: I1129 07:28:31.161876 4947 generic.go:334] "Generic (PLEG): container finished" podID="36d79fe8-06fb-4d21-8b9d-c0065e02274e" containerID="2bb85e69ae62cb2b4c0c5e5cf628871908238c8973275f8ec68937503de366ba" exitCode=0 Nov 29 07:28:31 crc kubenswrapper[4947]: I1129 07:28:31.162187 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k895b" event={"ID":"36d79fe8-06fb-4d21-8b9d-c0065e02274e","Type":"ContainerDied","Data":"2bb85e69ae62cb2b4c0c5e5cf628871908238c8973275f8ec68937503de366ba"} Nov 29 07:28:31 crc kubenswrapper[4947]: I1129 07:28:31.895076 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k895b" Nov 29 07:28:32 crc kubenswrapper[4947]: I1129 07:28:32.016653 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36d79fe8-06fb-4d21-8b9d-c0065e02274e-utilities\") pod \"36d79fe8-06fb-4d21-8b9d-c0065e02274e\" (UID: \"36d79fe8-06fb-4d21-8b9d-c0065e02274e\") " Nov 29 07:28:32 crc kubenswrapper[4947]: I1129 07:28:32.017458 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36d79fe8-06fb-4d21-8b9d-c0065e02274e-catalog-content\") pod \"36d79fe8-06fb-4d21-8b9d-c0065e02274e\" (UID: \"36d79fe8-06fb-4d21-8b9d-c0065e02274e\") " Nov 29 07:28:32 crc kubenswrapper[4947]: I1129 07:28:32.017613 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36d79fe8-06fb-4d21-8b9d-c0065e02274e-utilities" (OuterVolumeSpecName: "utilities") pod "36d79fe8-06fb-4d21-8b9d-c0065e02274e" (UID: "36d79fe8-06fb-4d21-8b9d-c0065e02274e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:28:32 crc kubenswrapper[4947]: I1129 07:28:32.019490 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csk6p\" (UniqueName: \"kubernetes.io/projected/36d79fe8-06fb-4d21-8b9d-c0065e02274e-kube-api-access-csk6p\") pod \"36d79fe8-06fb-4d21-8b9d-c0065e02274e\" (UID: \"36d79fe8-06fb-4d21-8b9d-c0065e02274e\") " Nov 29 07:28:32 crc kubenswrapper[4947]: I1129 07:28:32.020991 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36d79fe8-06fb-4d21-8b9d-c0065e02274e-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:32 crc kubenswrapper[4947]: I1129 07:28:32.045527 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36d79fe8-06fb-4d21-8b9d-c0065e02274e-kube-api-access-csk6p" (OuterVolumeSpecName: "kube-api-access-csk6p") pod "36d79fe8-06fb-4d21-8b9d-c0065e02274e" (UID: "36d79fe8-06fb-4d21-8b9d-c0065e02274e"). InnerVolumeSpecName "kube-api-access-csk6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:28:32 crc kubenswrapper[4947]: I1129 07:28:32.072991 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36d79fe8-06fb-4d21-8b9d-c0065e02274e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36d79fe8-06fb-4d21-8b9d-c0065e02274e" (UID: "36d79fe8-06fb-4d21-8b9d-c0065e02274e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:28:32 crc kubenswrapper[4947]: I1129 07:28:32.123075 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36d79fe8-06fb-4d21-8b9d-c0065e02274e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:32 crc kubenswrapper[4947]: I1129 07:28:32.123121 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csk6p\" (UniqueName: \"kubernetes.io/projected/36d79fe8-06fb-4d21-8b9d-c0065e02274e-kube-api-access-csk6p\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:32 crc kubenswrapper[4947]: I1129 07:28:32.174428 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxs6d" event={"ID":"7d5a7203-76b9-49e6-96c3-6bab103ff1a6","Type":"ContainerStarted","Data":"c12245e0c3bff47a9333f18c7df50efceba640645d8bd61900e9e3694932d6fa"} Nov 29 07:28:32 crc kubenswrapper[4947]: I1129 07:28:32.180094 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k895b" event={"ID":"36d79fe8-06fb-4d21-8b9d-c0065e02274e","Type":"ContainerDied","Data":"000ec64ac947d6e0113a3194e68654bf0c021556e1df25895a86673682a4b567"} Nov 29 07:28:32 crc kubenswrapper[4947]: I1129 07:28:32.180143 4947 scope.go:117] "RemoveContainer" containerID="2bb85e69ae62cb2b4c0c5e5cf628871908238c8973275f8ec68937503de366ba" Nov 29 07:28:32 crc kubenswrapper[4947]: I1129 07:28:32.180150 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k895b" Nov 29 07:28:32 crc kubenswrapper[4947]: I1129 07:28:32.197159 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jxs6d" podStartSLOduration=2.649373498 podStartE2EDuration="11.197139995s" podCreationTimestamp="2025-11-29 07:28:21 +0000 UTC" firstStartedPulling="2025-11-29 07:28:23.071981721 +0000 UTC m=+3254.116363802" lastFinishedPulling="2025-11-29 07:28:31.619748218 +0000 UTC m=+3262.664130299" observedRunningTime="2025-11-29 07:28:32.195648807 +0000 UTC m=+3263.240030918" watchObservedRunningTime="2025-11-29 07:28:32.197139995 +0000 UTC m=+3263.241522076" Nov 29 07:28:32 crc kubenswrapper[4947]: I1129 07:28:32.204631 4947 scope.go:117] "RemoveContainer" containerID="3824acabff7f1777467f9228656c180b37e820e00be51c237b65420a33f213fb" Nov 29 07:28:32 crc kubenswrapper[4947]: I1129 07:28:32.237328 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k895b"] Nov 29 07:28:32 crc kubenswrapper[4947]: I1129 07:28:32.250304 4947 scope.go:117] "RemoveContainer" containerID="fbebfb8ed18b47ce584d64ea07153231d1fcd4dda72133c9d0ee16bbf6d9c4ae" Nov 29 07:28:32 crc kubenswrapper[4947]: I1129 07:28:32.256039 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k895b"] Nov 29 07:28:33 crc kubenswrapper[4947]: I1129 07:28:33.192787 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36d79fe8-06fb-4d21-8b9d-c0065e02274e" path="/var/lib/kubelet/pods/36d79fe8-06fb-4d21-8b9d-c0065e02274e/volumes" Nov 29 07:28:36 crc kubenswrapper[4947]: I1129 07:28:36.179516 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:28:36 crc kubenswrapper[4947]: E1129 07:28:36.180277 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:28:41 crc kubenswrapper[4947]: I1129 07:28:41.592798 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jxs6d" Nov 29 07:28:41 crc kubenswrapper[4947]: I1129 07:28:41.593361 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jxs6d" Nov 29 07:28:41 crc kubenswrapper[4947]: I1129 07:28:41.642715 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jxs6d" Nov 29 07:28:42 crc kubenswrapper[4947]: I1129 07:28:42.318959 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jxs6d" Nov 29 07:28:42 crc kubenswrapper[4947]: I1129 07:28:42.371086 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jxs6d"] Nov 29 07:28:44 crc kubenswrapper[4947]: I1129 07:28:44.291195 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jxs6d" podUID="7d5a7203-76b9-49e6-96c3-6bab103ff1a6" containerName="registry-server" containerID="cri-o://c12245e0c3bff47a9333f18c7df50efceba640645d8bd61900e9e3694932d6fa" gracePeriod=2 Nov 29 07:28:45 crc kubenswrapper[4947]: I1129 07:28:45.303464 4947 generic.go:334] "Generic (PLEG): container finished" podID="7d5a7203-76b9-49e6-96c3-6bab103ff1a6" containerID="c12245e0c3bff47a9333f18c7df50efceba640645d8bd61900e9e3694932d6fa" exitCode=0 Nov 29 07:28:45 crc kubenswrapper[4947]: I1129 07:28:45.303526 4947 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-jxs6d" event={"ID":"7d5a7203-76b9-49e6-96c3-6bab103ff1a6","Type":"ContainerDied","Data":"c12245e0c3bff47a9333f18c7df50efceba640645d8bd61900e9e3694932d6fa"} Nov 29 07:28:45 crc kubenswrapper[4947]: I1129 07:28:45.400336 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jxs6d" Nov 29 07:28:45 crc kubenswrapper[4947]: I1129 07:28:45.494766 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5a7203-76b9-49e6-96c3-6bab103ff1a6-utilities\") pod \"7d5a7203-76b9-49e6-96c3-6bab103ff1a6\" (UID: \"7d5a7203-76b9-49e6-96c3-6bab103ff1a6\") " Nov 29 07:28:45 crc kubenswrapper[4947]: I1129 07:28:45.494912 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5a7203-76b9-49e6-96c3-6bab103ff1a6-catalog-content\") pod \"7d5a7203-76b9-49e6-96c3-6bab103ff1a6\" (UID: \"7d5a7203-76b9-49e6-96c3-6bab103ff1a6\") " Nov 29 07:28:45 crc kubenswrapper[4947]: I1129 07:28:45.495040 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2bm9\" (UniqueName: \"kubernetes.io/projected/7d5a7203-76b9-49e6-96c3-6bab103ff1a6-kube-api-access-z2bm9\") pod \"7d5a7203-76b9-49e6-96c3-6bab103ff1a6\" (UID: \"7d5a7203-76b9-49e6-96c3-6bab103ff1a6\") " Nov 29 07:28:45 crc kubenswrapper[4947]: I1129 07:28:45.495679 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d5a7203-76b9-49e6-96c3-6bab103ff1a6-utilities" (OuterVolumeSpecName: "utilities") pod "7d5a7203-76b9-49e6-96c3-6bab103ff1a6" (UID: "7d5a7203-76b9-49e6-96c3-6bab103ff1a6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:28:45 crc kubenswrapper[4947]: I1129 07:28:45.500969 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d5a7203-76b9-49e6-96c3-6bab103ff1a6-kube-api-access-z2bm9" (OuterVolumeSpecName: "kube-api-access-z2bm9") pod "7d5a7203-76b9-49e6-96c3-6bab103ff1a6" (UID: "7d5a7203-76b9-49e6-96c3-6bab103ff1a6"). InnerVolumeSpecName "kube-api-access-z2bm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:28:45 crc kubenswrapper[4947]: I1129 07:28:45.598206 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2bm9\" (UniqueName: \"kubernetes.io/projected/7d5a7203-76b9-49e6-96c3-6bab103ff1a6-kube-api-access-z2bm9\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:45 crc kubenswrapper[4947]: I1129 07:28:45.598300 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5a7203-76b9-49e6-96c3-6bab103ff1a6-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:45 crc kubenswrapper[4947]: I1129 07:28:45.607709 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d5a7203-76b9-49e6-96c3-6bab103ff1a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d5a7203-76b9-49e6-96c3-6bab103ff1a6" (UID: "7d5a7203-76b9-49e6-96c3-6bab103ff1a6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:28:45 crc kubenswrapper[4947]: I1129 07:28:45.699746 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5a7203-76b9-49e6-96c3-6bab103ff1a6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:46 crc kubenswrapper[4947]: I1129 07:28:46.313850 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxs6d" event={"ID":"7d5a7203-76b9-49e6-96c3-6bab103ff1a6","Type":"ContainerDied","Data":"d0a6b11f39e42e0883555b02b2339f91c675f9493c260386e70b53d9f079d170"} Nov 29 07:28:46 crc kubenswrapper[4947]: I1129 07:28:46.313890 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jxs6d" Nov 29 07:28:46 crc kubenswrapper[4947]: I1129 07:28:46.313908 4947 scope.go:117] "RemoveContainer" containerID="c12245e0c3bff47a9333f18c7df50efceba640645d8bd61900e9e3694932d6fa" Nov 29 07:28:46 crc kubenswrapper[4947]: I1129 07:28:46.345408 4947 scope.go:117] "RemoveContainer" containerID="b7670d2b9df42ec8a7a7ddb17f4438acd80b9045ffc0bbf843e3ea28b2dde653" Nov 29 07:28:46 crc kubenswrapper[4947]: I1129 07:28:46.348490 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jxs6d"] Nov 29 07:28:46 crc kubenswrapper[4947]: I1129 07:28:46.357491 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jxs6d"] Nov 29 07:28:46 crc kubenswrapper[4947]: I1129 07:28:46.381484 4947 scope.go:117] "RemoveContainer" containerID="f2ba2225a294d31e8b0b5fe02c1c63c4dd96b8734cc973227fa1630b4a0451d4" Nov 29 07:28:47 crc kubenswrapper[4947]: I1129 07:28:47.179820 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:28:47 crc kubenswrapper[4947]: E1129 07:28:47.180399 4947 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:28:47 crc kubenswrapper[4947]: I1129 07:28:47.192750 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d5a7203-76b9-49e6-96c3-6bab103ff1a6" path="/var/lib/kubelet/pods/7d5a7203-76b9-49e6-96c3-6bab103ff1a6/volumes" Nov 29 07:28:59 crc kubenswrapper[4947]: I1129 07:28:59.185281 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:28:59 crc kubenswrapper[4947]: E1129 07:28:59.185951 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:29:11 crc kubenswrapper[4947]: I1129 07:29:11.179545 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:29:11 crc kubenswrapper[4947]: E1129 07:29:11.180946 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:29:23 
crc kubenswrapper[4947]: I1129 07:29:23.178802 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:29:23 crc kubenswrapper[4947]: E1129 07:29:23.179556 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:29:34 crc kubenswrapper[4947]: I1129 07:29:34.179687 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:29:34 crc kubenswrapper[4947]: E1129 07:29:34.180761 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:29:49 crc kubenswrapper[4947]: I1129 07:29:49.185614 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:29:49 crc kubenswrapper[4947]: E1129 07:29:49.186360 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" 
Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.162717 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-qzj4k"] Nov 29 07:30:00 crc kubenswrapper[4947]: E1129 07:30:00.163882 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5a7203-76b9-49e6-96c3-6bab103ff1a6" containerName="registry-server" Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.163899 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5a7203-76b9-49e6-96c3-6bab103ff1a6" containerName="registry-server" Nov 29 07:30:00 crc kubenswrapper[4947]: E1129 07:30:00.163913 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5a7203-76b9-49e6-96c3-6bab103ff1a6" containerName="extract-content" Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.163920 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5a7203-76b9-49e6-96c3-6bab103ff1a6" containerName="extract-content" Nov 29 07:30:00 crc kubenswrapper[4947]: E1129 07:30:00.163947 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d79fe8-06fb-4d21-8b9d-c0065e02274e" containerName="extract-utilities" Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.163954 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d79fe8-06fb-4d21-8b9d-c0065e02274e" containerName="extract-utilities" Nov 29 07:30:00 crc kubenswrapper[4947]: E1129 07:30:00.163968 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5a7203-76b9-49e6-96c3-6bab103ff1a6" containerName="extract-utilities" Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.163974 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5a7203-76b9-49e6-96c3-6bab103ff1a6" containerName="extract-utilities" Nov 29 07:30:00 crc kubenswrapper[4947]: E1129 07:30:00.163991 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d79fe8-06fb-4d21-8b9d-c0065e02274e" containerName="extract-content" Nov 29 
07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.163999 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d79fe8-06fb-4d21-8b9d-c0065e02274e" containerName="extract-content" Nov 29 07:30:00 crc kubenswrapper[4947]: E1129 07:30:00.164026 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d79fe8-06fb-4d21-8b9d-c0065e02274e" containerName="registry-server" Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.164036 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d79fe8-06fb-4d21-8b9d-c0065e02274e" containerName="registry-server" Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.164249 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5a7203-76b9-49e6-96c3-6bab103ff1a6" containerName="registry-server" Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.164271 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d79fe8-06fb-4d21-8b9d-c0065e02274e" containerName="registry-server" Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.165214 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-qzj4k" Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.168982 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.169781 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.177471 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-qzj4k"] Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.180026 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:30:00 crc kubenswrapper[4947]: E1129 07:30:00.180358 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.257868 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74e233d8-d395-4660-8bfa-de810efcc150-secret-volume\") pod \"collect-profiles-29406690-qzj4k\" (UID: \"74e233d8-d395-4660-8bfa-de810efcc150\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-qzj4k" Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.257972 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/74e233d8-d395-4660-8bfa-de810efcc150-config-volume\") pod \"collect-profiles-29406690-qzj4k\" (UID: \"74e233d8-d395-4660-8bfa-de810efcc150\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-qzj4k" Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.257997 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqz5m\" (UniqueName: \"kubernetes.io/projected/74e233d8-d395-4660-8bfa-de810efcc150-kube-api-access-sqz5m\") pod \"collect-profiles-29406690-qzj4k\" (UID: \"74e233d8-d395-4660-8bfa-de810efcc150\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-qzj4k" Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.361645 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74e233d8-d395-4660-8bfa-de810efcc150-secret-volume\") pod \"collect-profiles-29406690-qzj4k\" (UID: \"74e233d8-d395-4660-8bfa-de810efcc150\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-qzj4k" Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.361728 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74e233d8-d395-4660-8bfa-de810efcc150-config-volume\") pod \"collect-profiles-29406690-qzj4k\" (UID: \"74e233d8-d395-4660-8bfa-de810efcc150\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-qzj4k" Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.361750 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqz5m\" (UniqueName: \"kubernetes.io/projected/74e233d8-d395-4660-8bfa-de810efcc150-kube-api-access-sqz5m\") pod \"collect-profiles-29406690-qzj4k\" (UID: \"74e233d8-d395-4660-8bfa-de810efcc150\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-qzj4k" Nov 29 07:30:00 crc 
kubenswrapper[4947]: I1129 07:30:00.363262 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74e233d8-d395-4660-8bfa-de810efcc150-config-volume\") pod \"collect-profiles-29406690-qzj4k\" (UID: \"74e233d8-d395-4660-8bfa-de810efcc150\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-qzj4k" Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.368394 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74e233d8-d395-4660-8bfa-de810efcc150-secret-volume\") pod \"collect-profiles-29406690-qzj4k\" (UID: \"74e233d8-d395-4660-8bfa-de810efcc150\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-qzj4k" Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.381686 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqz5m\" (UniqueName: \"kubernetes.io/projected/74e233d8-d395-4660-8bfa-de810efcc150-kube-api-access-sqz5m\") pod \"collect-profiles-29406690-qzj4k\" (UID: \"74e233d8-d395-4660-8bfa-de810efcc150\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-qzj4k" Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.498947 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-qzj4k" Nov 29 07:30:00 crc kubenswrapper[4947]: I1129 07:30:00.964429 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-qzj4k"] Nov 29 07:30:01 crc kubenswrapper[4947]: I1129 07:30:01.018586 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-qzj4k" event={"ID":"74e233d8-d395-4660-8bfa-de810efcc150","Type":"ContainerStarted","Data":"997ab977ccffcb5b7a3a3a9411187b4f64b7af0e63aa010256c106235e3a0952"} Nov 29 07:30:02 crc kubenswrapper[4947]: I1129 07:30:02.029507 4947 generic.go:334] "Generic (PLEG): container finished" podID="74e233d8-d395-4660-8bfa-de810efcc150" containerID="e649ec8e6d1b98eba04f63f2b9b94231519a035455f80d49e3c828bd3bf6bdb3" exitCode=0 Nov 29 07:30:02 crc kubenswrapper[4947]: I1129 07:30:02.029625 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-qzj4k" event={"ID":"74e233d8-d395-4660-8bfa-de810efcc150","Type":"ContainerDied","Data":"e649ec8e6d1b98eba04f63f2b9b94231519a035455f80d49e3c828bd3bf6bdb3"} Nov 29 07:30:03 crc kubenswrapper[4947]: I1129 07:30:03.349514 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-qzj4k" Nov 29 07:30:03 crc kubenswrapper[4947]: I1129 07:30:03.425186 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74e233d8-d395-4660-8bfa-de810efcc150-config-volume\") pod \"74e233d8-d395-4660-8bfa-de810efcc150\" (UID: \"74e233d8-d395-4660-8bfa-de810efcc150\") " Nov 29 07:30:03 crc kubenswrapper[4947]: I1129 07:30:03.425501 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74e233d8-d395-4660-8bfa-de810efcc150-secret-volume\") pod \"74e233d8-d395-4660-8bfa-de810efcc150\" (UID: \"74e233d8-d395-4660-8bfa-de810efcc150\") " Nov 29 07:30:03 crc kubenswrapper[4947]: I1129 07:30:03.425735 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqz5m\" (UniqueName: \"kubernetes.io/projected/74e233d8-d395-4660-8bfa-de810efcc150-kube-api-access-sqz5m\") pod \"74e233d8-d395-4660-8bfa-de810efcc150\" (UID: \"74e233d8-d395-4660-8bfa-de810efcc150\") " Nov 29 07:30:03 crc kubenswrapper[4947]: I1129 07:30:03.425904 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74e233d8-d395-4660-8bfa-de810efcc150-config-volume" (OuterVolumeSpecName: "config-volume") pod "74e233d8-d395-4660-8bfa-de810efcc150" (UID: "74e233d8-d395-4660-8bfa-de810efcc150"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:03 crc kubenswrapper[4947]: I1129 07:30:03.426601 4947 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74e233d8-d395-4660-8bfa-de810efcc150-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:03 crc kubenswrapper[4947]: I1129 07:30:03.431284 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e233d8-d395-4660-8bfa-de810efcc150-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "74e233d8-d395-4660-8bfa-de810efcc150" (UID: "74e233d8-d395-4660-8bfa-de810efcc150"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:30:03 crc kubenswrapper[4947]: I1129 07:30:03.432123 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74e233d8-d395-4660-8bfa-de810efcc150-kube-api-access-sqz5m" (OuterVolumeSpecName: "kube-api-access-sqz5m") pod "74e233d8-d395-4660-8bfa-de810efcc150" (UID: "74e233d8-d395-4660-8bfa-de810efcc150"). InnerVolumeSpecName "kube-api-access-sqz5m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:30:03 crc kubenswrapper[4947]: I1129 07:30:03.528892 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqz5m\" (UniqueName: \"kubernetes.io/projected/74e233d8-d395-4660-8bfa-de810efcc150-kube-api-access-sqz5m\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:03 crc kubenswrapper[4947]: I1129 07:30:03.528938 4947 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74e233d8-d395-4660-8bfa-de810efcc150-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:04 crc kubenswrapper[4947]: I1129 07:30:04.051829 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-qzj4k" event={"ID":"74e233d8-d395-4660-8bfa-de810efcc150","Type":"ContainerDied","Data":"997ab977ccffcb5b7a3a3a9411187b4f64b7af0e63aa010256c106235e3a0952"} Nov 29 07:30:04 crc kubenswrapper[4947]: I1129 07:30:04.051869 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="997ab977ccffcb5b7a3a3a9411187b4f64b7af0e63aa010256c106235e3a0952" Nov 29 07:30:04 crc kubenswrapper[4947]: I1129 07:30:04.052431 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-qzj4k" Nov 29 07:30:04 crc kubenswrapper[4947]: I1129 07:30:04.440717 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406645-8z2x4"] Nov 29 07:30:04 crc kubenswrapper[4947]: I1129 07:30:04.452374 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406645-8z2x4"] Nov 29 07:30:05 crc kubenswrapper[4947]: I1129 07:30:05.190414 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28fefedc-ca81-4a45-b82d-59283c409bc8" path="/var/lib/kubelet/pods/28fefedc-ca81-4a45-b82d-59283c409bc8/volumes" Nov 29 07:30:13 crc kubenswrapper[4947]: I1129 07:30:13.179817 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:30:13 crc kubenswrapper[4947]: E1129 07:30:13.181026 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:30:27 crc kubenswrapper[4947]: I1129 07:30:27.178446 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:30:27 crc kubenswrapper[4947]: E1129 07:30:27.179283 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:30:39 crc kubenswrapper[4947]: I1129 07:30:39.186822 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:30:39 crc kubenswrapper[4947]: E1129 07:30:39.188111 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:30:39 crc kubenswrapper[4947]: I1129 07:30:39.688755 4947 scope.go:117] "RemoveContainer" containerID="b11232dc16c361fdc972ac683f79ca7e1d88081cdec6cc576bf8dff38c8d6158" Nov 29 07:30:53 crc kubenswrapper[4947]: I1129 07:30:53.178752 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:30:53 crc kubenswrapper[4947]: E1129 07:30:53.179560 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:31:06 crc kubenswrapper[4947]: I1129 07:31:06.179297 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:31:06 crc kubenswrapper[4947]: E1129 07:31:06.180571 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:31:18 crc kubenswrapper[4947]: I1129 07:31:18.179265 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:31:18 crc kubenswrapper[4947]: E1129 07:31:18.180060 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:31:31 crc kubenswrapper[4947]: I1129 07:31:31.179942 4947 scope.go:117] "RemoveContainer" containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:31:31 crc kubenswrapper[4947]: I1129 07:31:31.893065 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"f0e0197d224bb0f950a728e1a6a38d3e1a99970cb7adaa9271f86cc1b41a4062"} Nov 29 07:32:46 crc kubenswrapper[4947]: I1129 07:32:46.578998 4947 generic.go:334] "Generic (PLEG): container finished" podID="97ae78bf-a258-4fbc-912c-f7d6ce706e54" containerID="4a2a4576603769b101a0cc56d7e6dfc0b5b3949a5e2c9dc63c7e01ad109186fe" exitCode=0 Nov 29 07:32:46 crc kubenswrapper[4947]: I1129 07:32:46.579107 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" 
event={"ID":"97ae78bf-a258-4fbc-912c-f7d6ce706e54","Type":"ContainerDied","Data":"4a2a4576603769b101a0cc56d7e6dfc0b5b3949a5e2c9dc63c7e01ad109186fe"} Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.032723 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.135773 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-libvirt-combined-ca-bundle\") pod \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.135898 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-libvirt-secret-0\") pod \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.135980 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-ceph\") pod \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.135999 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xglxk\" (UniqueName: \"kubernetes.io/projected/97ae78bf-a258-4fbc-912c-f7d6ce706e54-kube-api-access-xglxk\") pod \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.136082 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-ssh-key\") pod \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.136129 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-inventory\") pod \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\" (UID: \"97ae78bf-a258-4fbc-912c-f7d6ce706e54\") " Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.142282 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "97ae78bf-a258-4fbc-912c-f7d6ce706e54" (UID: "97ae78bf-a258-4fbc-912c-f7d6ce706e54"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.142919 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ae78bf-a258-4fbc-912c-f7d6ce706e54-kube-api-access-xglxk" (OuterVolumeSpecName: "kube-api-access-xglxk") pod "97ae78bf-a258-4fbc-912c-f7d6ce706e54" (UID: "97ae78bf-a258-4fbc-912c-f7d6ce706e54"). InnerVolumeSpecName "kube-api-access-xglxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.147116 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-ceph" (OuterVolumeSpecName: "ceph") pod "97ae78bf-a258-4fbc-912c-f7d6ce706e54" (UID: "97ae78bf-a258-4fbc-912c-f7d6ce706e54"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.169702 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "97ae78bf-a258-4fbc-912c-f7d6ce706e54" (UID: "97ae78bf-a258-4fbc-912c-f7d6ce706e54"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.171836 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "97ae78bf-a258-4fbc-912c-f7d6ce706e54" (UID: "97ae78bf-a258-4fbc-912c-f7d6ce706e54"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.172072 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-inventory" (OuterVolumeSpecName: "inventory") pod "97ae78bf-a258-4fbc-912c-f7d6ce706e54" (UID: "97ae78bf-a258-4fbc-912c-f7d6ce706e54"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.241343 4947 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.241407 4947 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.241423 4947 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-ceph\") on node \"crc\" DevicePath \"\"" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.241437 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xglxk\" (UniqueName: \"kubernetes.io/projected/97ae78bf-a258-4fbc-912c-f7d6ce706e54-kube-api-access-xglxk\") on node \"crc\" DevicePath \"\"" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.241451 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.241463 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97ae78bf-a258-4fbc-912c-f7d6ce706e54-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.603052 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" event={"ID":"97ae78bf-a258-4fbc-912c-f7d6ce706e54","Type":"ContainerDied","Data":"2c616502c06489d694ccb03715648d275d2eb8099e1f42a8fa3ce06fa60e1a00"} Nov 29 07:32:48 crc 
kubenswrapper[4947]: I1129 07:32:48.603106 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c616502c06489d694ccb03715648d275d2eb8099e1f42a8fa3ce06fa60e1a00" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.603650 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.702059 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx"] Nov 29 07:32:48 crc kubenswrapper[4947]: E1129 07:32:48.702460 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74e233d8-d395-4660-8bfa-de810efcc150" containerName="collect-profiles" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.702482 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="74e233d8-d395-4660-8bfa-de810efcc150" containerName="collect-profiles" Nov 29 07:32:48 crc kubenswrapper[4947]: E1129 07:32:48.702510 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ae78bf-a258-4fbc-912c-f7d6ce706e54" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.702518 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ae78bf-a258-4fbc-912c-f7d6ce706e54" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.702696 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="74e233d8-d395-4660-8bfa-de810efcc150" containerName="collect-profiles" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.702716 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="97ae78bf-a258-4fbc-912c-f7d6ce706e54" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.703322 4947 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.707299 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.707576 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.707714 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.707870 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.707990 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvljs" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.708201 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.708351 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.708555 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.708669 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.724566 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx"] Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.861439 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.861506 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.861530 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.861578 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f14a521e-01c4-4720-984d-65f1123397ae-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.861612 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: 
\"kubernetes.io/configmap/f14a521e-01c4-4720-984d-65f1123397ae-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.861636 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.861662 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.861684 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.861725 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-migration-ssh-key-1\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.861740 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.861757 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2mll\" (UniqueName: \"kubernetes.io/projected/f14a521e-01c4-4720-984d-65f1123397ae-kube-api-access-x2mll\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.963528 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f14a521e-01c4-4720-984d-65f1123397ae-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.963601 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/f14a521e-01c4-4720-984d-65f1123397ae-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.963631 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.963664 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.963686 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.963723 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.963749 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.963774 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2mll\" (UniqueName: \"kubernetes.io/projected/f14a521e-01c4-4720-984d-65f1123397ae-kube-api-access-x2mll\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.963833 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.963868 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.963887 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.965780 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/f14a521e-01c4-4720-984d-65f1123397ae-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.966525 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f14a521e-01c4-4720-984d-65f1123397ae-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.970241 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.970365 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.970669 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.971819 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.972163 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.973049 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.984926 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.985044 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:48 crc kubenswrapper[4947]: I1129 07:32:48.986574 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2mll\" (UniqueName: \"kubernetes.io/projected/f14a521e-01c4-4720-984d-65f1123397ae-kube-api-access-x2mll\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:49 crc kubenswrapper[4947]: I1129 07:32:49.021986 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:32:49 crc kubenswrapper[4947]: I1129 07:32:49.586775 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx"] Nov 29 07:32:49 crc kubenswrapper[4947]: I1129 07:32:49.594324 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:32:49 crc kubenswrapper[4947]: I1129 07:32:49.615381 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" event={"ID":"f14a521e-01c4-4720-984d-65f1123397ae","Type":"ContainerStarted","Data":"33260e5540b1f380b44d058d3408d92263826b9e1ff364fffb141aefb3f454ac"} Nov 29 07:32:59 crc kubenswrapper[4947]: I1129 07:32:59.722954 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" event={"ID":"f14a521e-01c4-4720-984d-65f1123397ae","Type":"ContainerStarted","Data":"ecdb04437dfd44c24a9c1e8dcb761803beb696b473dc77bdc3a0e9131568d65a"} Nov 29 07:32:59 crc kubenswrapper[4947]: I1129 07:32:59.748365 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" podStartSLOduration=2.54205556 podStartE2EDuration="11.748340744s" podCreationTimestamp="2025-11-29 07:32:48 +0000 UTC" firstStartedPulling="2025-11-29 07:32:49.59399179 +0000 UTC m=+3520.638373871" lastFinishedPulling="2025-11-29 07:32:58.800276974 +0000 UTC m=+3529.844659055" observedRunningTime="2025-11-29 07:32:59.745121583 +0000 UTC m=+3530.789503664" watchObservedRunningTime="2025-11-29 07:32:59.748340744 +0000 UTC m=+3530.792722825" Nov 29 07:33:31 crc kubenswrapper[4947]: I1129 07:33:31.028511 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tfsb7"] Nov 29 07:33:31 crc 
kubenswrapper[4947]: I1129 07:33:31.031768 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tfsb7" Nov 29 07:33:31 crc kubenswrapper[4947]: I1129 07:33:31.046637 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tfsb7"] Nov 29 07:33:31 crc kubenswrapper[4947]: I1129 07:33:31.163944 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab1a01d0-cc98-41ff-9fba-56fa1f7aa305-utilities\") pod \"community-operators-tfsb7\" (UID: \"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305\") " pod="openshift-marketplace/community-operators-tfsb7" Nov 29 07:33:31 crc kubenswrapper[4947]: I1129 07:33:31.164081 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab1a01d0-cc98-41ff-9fba-56fa1f7aa305-catalog-content\") pod \"community-operators-tfsb7\" (UID: \"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305\") " pod="openshift-marketplace/community-operators-tfsb7" Nov 29 07:33:31 crc kubenswrapper[4947]: I1129 07:33:31.164165 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94hz8\" (UniqueName: \"kubernetes.io/projected/ab1a01d0-cc98-41ff-9fba-56fa1f7aa305-kube-api-access-94hz8\") pod \"community-operators-tfsb7\" (UID: \"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305\") " pod="openshift-marketplace/community-operators-tfsb7" Nov 29 07:33:31 crc kubenswrapper[4947]: I1129 07:33:31.265835 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab1a01d0-cc98-41ff-9fba-56fa1f7aa305-catalog-content\") pod \"community-operators-tfsb7\" (UID: \"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305\") " pod="openshift-marketplace/community-operators-tfsb7" Nov 29 
07:33:31 crc kubenswrapper[4947]: I1129 07:33:31.266197 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94hz8\" (UniqueName: \"kubernetes.io/projected/ab1a01d0-cc98-41ff-9fba-56fa1f7aa305-kube-api-access-94hz8\") pod \"community-operators-tfsb7\" (UID: \"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305\") " pod="openshift-marketplace/community-operators-tfsb7" Nov 29 07:33:31 crc kubenswrapper[4947]: I1129 07:33:31.266384 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab1a01d0-cc98-41ff-9fba-56fa1f7aa305-utilities\") pod \"community-operators-tfsb7\" (UID: \"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305\") " pod="openshift-marketplace/community-operators-tfsb7" Nov 29 07:33:31 crc kubenswrapper[4947]: I1129 07:33:31.266436 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab1a01d0-cc98-41ff-9fba-56fa1f7aa305-catalog-content\") pod \"community-operators-tfsb7\" (UID: \"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305\") " pod="openshift-marketplace/community-operators-tfsb7" Nov 29 07:33:31 crc kubenswrapper[4947]: I1129 07:33:31.266847 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab1a01d0-cc98-41ff-9fba-56fa1f7aa305-utilities\") pod \"community-operators-tfsb7\" (UID: \"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305\") " pod="openshift-marketplace/community-operators-tfsb7" Nov 29 07:33:31 crc kubenswrapper[4947]: I1129 07:33:31.292630 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94hz8\" (UniqueName: \"kubernetes.io/projected/ab1a01d0-cc98-41ff-9fba-56fa1f7aa305-kube-api-access-94hz8\") pod \"community-operators-tfsb7\" (UID: \"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305\") " pod="openshift-marketplace/community-operators-tfsb7" Nov 29 07:33:31 crc kubenswrapper[4947]: 
I1129 07:33:31.362657 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tfsb7" Nov 29 07:33:31 crc kubenswrapper[4947]: I1129 07:33:31.937193 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tfsb7"] Nov 29 07:33:32 crc kubenswrapper[4947]: I1129 07:33:32.007940 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfsb7" event={"ID":"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305","Type":"ContainerStarted","Data":"0d5f1aa9f4175a9d7f901f17fde9b254d794daf873d3579d883faeb57119d7e3"} Nov 29 07:33:33 crc kubenswrapper[4947]: I1129 07:33:33.019591 4947 generic.go:334] "Generic (PLEG): container finished" podID="ab1a01d0-cc98-41ff-9fba-56fa1f7aa305" containerID="0442b26d98842e090a4a7e338f370d268e14538698d0b99907904344ed59e982" exitCode=0 Nov 29 07:33:33 crc kubenswrapper[4947]: I1129 07:33:33.019781 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfsb7" event={"ID":"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305","Type":"ContainerDied","Data":"0442b26d98842e090a4a7e338f370d268e14538698d0b99907904344ed59e982"} Nov 29 07:33:35 crc kubenswrapper[4947]: I1129 07:33:35.070037 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfsb7" event={"ID":"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305","Type":"ContainerStarted","Data":"3993412612fdc7be35098be0fbbbf21ef058ef7ed85cb52908ac9d845bdb1a26"} Nov 29 07:33:36 crc kubenswrapper[4947]: I1129 07:33:36.080896 4947 generic.go:334] "Generic (PLEG): container finished" podID="ab1a01d0-cc98-41ff-9fba-56fa1f7aa305" containerID="3993412612fdc7be35098be0fbbbf21ef058ef7ed85cb52908ac9d845bdb1a26" exitCode=0 Nov 29 07:33:36 crc kubenswrapper[4947]: I1129 07:33:36.081018 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfsb7" 
event={"ID":"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305","Type":"ContainerDied","Data":"3993412612fdc7be35098be0fbbbf21ef058ef7ed85cb52908ac9d845bdb1a26"} Nov 29 07:33:38 crc kubenswrapper[4947]: I1129 07:33:38.100482 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfsb7" event={"ID":"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305","Type":"ContainerStarted","Data":"c6d76dbb4c9ddf787634bf2d7a384737a8effd3d57ced88e868105e320dfdb8d"} Nov 29 07:33:38 crc kubenswrapper[4947]: I1129 07:33:38.120793 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tfsb7" podStartSLOduration=3.239147324 podStartE2EDuration="7.120769921s" podCreationTimestamp="2025-11-29 07:33:31 +0000 UTC" firstStartedPulling="2025-11-29 07:33:33.021451844 +0000 UTC m=+3564.065833925" lastFinishedPulling="2025-11-29 07:33:36.903074441 +0000 UTC m=+3567.947456522" observedRunningTime="2025-11-29 07:33:38.120629387 +0000 UTC m=+3569.165011478" watchObservedRunningTime="2025-11-29 07:33:38.120769921 +0000 UTC m=+3569.165152002" Nov 29 07:33:41 crc kubenswrapper[4947]: I1129 07:33:41.362776 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tfsb7" Nov 29 07:33:41 crc kubenswrapper[4947]: I1129 07:33:41.363406 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tfsb7" Nov 29 07:33:41 crc kubenswrapper[4947]: I1129 07:33:41.433040 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tfsb7" Nov 29 07:33:42 crc kubenswrapper[4947]: I1129 07:33:42.183165 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tfsb7" Nov 29 07:33:42 crc kubenswrapper[4947]: I1129 07:33:42.238580 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-tfsb7"] Nov 29 07:33:44 crc kubenswrapper[4947]: I1129 07:33:44.147761 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tfsb7" podUID="ab1a01d0-cc98-41ff-9fba-56fa1f7aa305" containerName="registry-server" containerID="cri-o://c6d76dbb4c9ddf787634bf2d7a384737a8effd3d57ced88e868105e320dfdb8d" gracePeriod=2 Nov 29 07:33:44 crc kubenswrapper[4947]: I1129 07:33:44.637981 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tfsb7" Nov 29 07:33:44 crc kubenswrapper[4947]: I1129 07:33:44.761701 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94hz8\" (UniqueName: \"kubernetes.io/projected/ab1a01d0-cc98-41ff-9fba-56fa1f7aa305-kube-api-access-94hz8\") pod \"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305\" (UID: \"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305\") " Nov 29 07:33:44 crc kubenswrapper[4947]: I1129 07:33:44.761917 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab1a01d0-cc98-41ff-9fba-56fa1f7aa305-utilities\") pod \"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305\" (UID: \"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305\") " Nov 29 07:33:44 crc kubenswrapper[4947]: I1129 07:33:44.761993 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab1a01d0-cc98-41ff-9fba-56fa1f7aa305-catalog-content\") pod \"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305\" (UID: \"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305\") " Nov 29 07:33:44 crc kubenswrapper[4947]: I1129 07:33:44.763537 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab1a01d0-cc98-41ff-9fba-56fa1f7aa305-utilities" (OuterVolumeSpecName: "utilities") pod "ab1a01d0-cc98-41ff-9fba-56fa1f7aa305" (UID: 
"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:33:44 crc kubenswrapper[4947]: I1129 07:33:44.769177 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab1a01d0-cc98-41ff-9fba-56fa1f7aa305-kube-api-access-94hz8" (OuterVolumeSpecName: "kube-api-access-94hz8") pod "ab1a01d0-cc98-41ff-9fba-56fa1f7aa305" (UID: "ab1a01d0-cc98-41ff-9fba-56fa1f7aa305"). InnerVolumeSpecName "kube-api-access-94hz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:33:44 crc kubenswrapper[4947]: I1129 07:33:44.829026 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab1a01d0-cc98-41ff-9fba-56fa1f7aa305-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab1a01d0-cc98-41ff-9fba-56fa1f7aa305" (UID: "ab1a01d0-cc98-41ff-9fba-56fa1f7aa305"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:33:44 crc kubenswrapper[4947]: I1129 07:33:44.865248 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94hz8\" (UniqueName: \"kubernetes.io/projected/ab1a01d0-cc98-41ff-9fba-56fa1f7aa305-kube-api-access-94hz8\") on node \"crc\" DevicePath \"\"" Nov 29 07:33:44 crc kubenswrapper[4947]: I1129 07:33:44.865304 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab1a01d0-cc98-41ff-9fba-56fa1f7aa305-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:33:44 crc kubenswrapper[4947]: I1129 07:33:44.865318 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab1a01d0-cc98-41ff-9fba-56fa1f7aa305-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:33:45 crc kubenswrapper[4947]: I1129 07:33:45.160947 4947 generic.go:334] "Generic (PLEG): container finished" 
podID="ab1a01d0-cc98-41ff-9fba-56fa1f7aa305" containerID="c6d76dbb4c9ddf787634bf2d7a384737a8effd3d57ced88e868105e320dfdb8d" exitCode=0 Nov 29 07:33:45 crc kubenswrapper[4947]: I1129 07:33:45.161004 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfsb7" event={"ID":"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305","Type":"ContainerDied","Data":"c6d76dbb4c9ddf787634bf2d7a384737a8effd3d57ced88e868105e320dfdb8d"} Nov 29 07:33:45 crc kubenswrapper[4947]: I1129 07:33:45.161042 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfsb7" event={"ID":"ab1a01d0-cc98-41ff-9fba-56fa1f7aa305","Type":"ContainerDied","Data":"0d5f1aa9f4175a9d7f901f17fde9b254d794daf873d3579d883faeb57119d7e3"} Nov 29 07:33:45 crc kubenswrapper[4947]: I1129 07:33:45.161044 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tfsb7" Nov 29 07:33:45 crc kubenswrapper[4947]: I1129 07:33:45.161068 4947 scope.go:117] "RemoveContainer" containerID="c6d76dbb4c9ddf787634bf2d7a384737a8effd3d57ced88e868105e320dfdb8d" Nov 29 07:33:45 crc kubenswrapper[4947]: I1129 07:33:45.208900 4947 scope.go:117] "RemoveContainer" containerID="3993412612fdc7be35098be0fbbbf21ef058ef7ed85cb52908ac9d845bdb1a26" Nov 29 07:33:45 crc kubenswrapper[4947]: I1129 07:33:45.222049 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tfsb7"] Nov 29 07:33:45 crc kubenswrapper[4947]: I1129 07:33:45.250426 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tfsb7"] Nov 29 07:33:45 crc kubenswrapper[4947]: I1129 07:33:45.256959 4947 scope.go:117] "RemoveContainer" containerID="0442b26d98842e090a4a7e338f370d268e14538698d0b99907904344ed59e982" Nov 29 07:33:45 crc kubenswrapper[4947]: I1129 07:33:45.314426 4947 scope.go:117] "RemoveContainer" 
containerID="c6d76dbb4c9ddf787634bf2d7a384737a8effd3d57ced88e868105e320dfdb8d" Nov 29 07:33:45 crc kubenswrapper[4947]: E1129 07:33:45.320383 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6d76dbb4c9ddf787634bf2d7a384737a8effd3d57ced88e868105e320dfdb8d\": container with ID starting with c6d76dbb4c9ddf787634bf2d7a384737a8effd3d57ced88e868105e320dfdb8d not found: ID does not exist" containerID="c6d76dbb4c9ddf787634bf2d7a384737a8effd3d57ced88e868105e320dfdb8d" Nov 29 07:33:45 crc kubenswrapper[4947]: I1129 07:33:45.320437 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d76dbb4c9ddf787634bf2d7a384737a8effd3d57ced88e868105e320dfdb8d"} err="failed to get container status \"c6d76dbb4c9ddf787634bf2d7a384737a8effd3d57ced88e868105e320dfdb8d\": rpc error: code = NotFound desc = could not find container \"c6d76dbb4c9ddf787634bf2d7a384737a8effd3d57ced88e868105e320dfdb8d\": container with ID starting with c6d76dbb4c9ddf787634bf2d7a384737a8effd3d57ced88e868105e320dfdb8d not found: ID does not exist" Nov 29 07:33:45 crc kubenswrapper[4947]: I1129 07:33:45.320467 4947 scope.go:117] "RemoveContainer" containerID="3993412612fdc7be35098be0fbbbf21ef058ef7ed85cb52908ac9d845bdb1a26" Nov 29 07:33:45 crc kubenswrapper[4947]: E1129 07:33:45.324425 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3993412612fdc7be35098be0fbbbf21ef058ef7ed85cb52908ac9d845bdb1a26\": container with ID starting with 3993412612fdc7be35098be0fbbbf21ef058ef7ed85cb52908ac9d845bdb1a26 not found: ID does not exist" containerID="3993412612fdc7be35098be0fbbbf21ef058ef7ed85cb52908ac9d845bdb1a26" Nov 29 07:33:45 crc kubenswrapper[4947]: I1129 07:33:45.324484 4947 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3993412612fdc7be35098be0fbbbf21ef058ef7ed85cb52908ac9d845bdb1a26"} err="failed to get container status \"3993412612fdc7be35098be0fbbbf21ef058ef7ed85cb52908ac9d845bdb1a26\": rpc error: code = NotFound desc = could not find container \"3993412612fdc7be35098be0fbbbf21ef058ef7ed85cb52908ac9d845bdb1a26\": container with ID starting with 3993412612fdc7be35098be0fbbbf21ef058ef7ed85cb52908ac9d845bdb1a26 not found: ID does not exist" Nov 29 07:33:45 crc kubenswrapper[4947]: I1129 07:33:45.324521 4947 scope.go:117] "RemoveContainer" containerID="0442b26d98842e090a4a7e338f370d268e14538698d0b99907904344ed59e982" Nov 29 07:33:45 crc kubenswrapper[4947]: E1129 07:33:45.328447 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0442b26d98842e090a4a7e338f370d268e14538698d0b99907904344ed59e982\": container with ID starting with 0442b26d98842e090a4a7e338f370d268e14538698d0b99907904344ed59e982 not found: ID does not exist" containerID="0442b26d98842e090a4a7e338f370d268e14538698d0b99907904344ed59e982" Nov 29 07:33:45 crc kubenswrapper[4947]: I1129 07:33:45.328505 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0442b26d98842e090a4a7e338f370d268e14538698d0b99907904344ed59e982"} err="failed to get container status \"0442b26d98842e090a4a7e338f370d268e14538698d0b99907904344ed59e982\": rpc error: code = NotFound desc = could not find container \"0442b26d98842e090a4a7e338f370d268e14538698d0b99907904344ed59e982\": container with ID starting with 0442b26d98842e090a4a7e338f370d268e14538698d0b99907904344ed59e982 not found: ID does not exist" Nov 29 07:33:47 crc kubenswrapper[4947]: I1129 07:33:47.207016 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab1a01d0-cc98-41ff-9fba-56fa1f7aa305" path="/var/lib/kubelet/pods/ab1a01d0-cc98-41ff-9fba-56fa1f7aa305/volumes" Nov 29 07:33:52 crc kubenswrapper[4947]: I1129 
07:33:52.987393 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:33:52 crc kubenswrapper[4947]: I1129 07:33:52.987837 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:34:22 crc kubenswrapper[4947]: I1129 07:34:22.987886 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:34:22 crc kubenswrapper[4947]: I1129 07:34:22.988473 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:34:52 crc kubenswrapper[4947]: I1129 07:34:52.987280 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:34:52 crc kubenswrapper[4947]: I1129 07:34:52.987704 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" 
podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:34:52 crc kubenswrapper[4947]: I1129 07:34:52.987741 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 07:34:52 crc kubenswrapper[4947]: I1129 07:34:52.988436 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f0e0197d224bb0f950a728e1a6a38d3e1a99970cb7adaa9271f86cc1b41a4062"} pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:34:52 crc kubenswrapper[4947]: I1129 07:34:52.988489 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" containerID="cri-o://f0e0197d224bb0f950a728e1a6a38d3e1a99970cb7adaa9271f86cc1b41a4062" gracePeriod=600 Nov 29 07:34:53 crc kubenswrapper[4947]: I1129 07:34:53.855742 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerID="f0e0197d224bb0f950a728e1a6a38d3e1a99970cb7adaa9271f86cc1b41a4062" exitCode=0 Nov 29 07:34:53 crc kubenswrapper[4947]: I1129 07:34:53.856110 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerDied","Data":"f0e0197d224bb0f950a728e1a6a38d3e1a99970cb7adaa9271f86cc1b41a4062"} Nov 29 07:34:53 crc kubenswrapper[4947]: I1129 07:34:53.856148 4947 scope.go:117] "RemoveContainer" 
containerID="9f307de5f94b3683e9f730374e74b4cfb62a198f3b5d373575c97183b50a6cd9" Nov 29 07:34:54 crc kubenswrapper[4947]: I1129 07:34:54.879029 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166"} Nov 29 07:35:40 crc kubenswrapper[4947]: I1129 07:35:40.490368 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nmj2p"] Nov 29 07:35:40 crc kubenswrapper[4947]: E1129 07:35:40.491420 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab1a01d0-cc98-41ff-9fba-56fa1f7aa305" containerName="registry-server" Nov 29 07:35:40 crc kubenswrapper[4947]: I1129 07:35:40.491436 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab1a01d0-cc98-41ff-9fba-56fa1f7aa305" containerName="registry-server" Nov 29 07:35:40 crc kubenswrapper[4947]: E1129 07:35:40.491449 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab1a01d0-cc98-41ff-9fba-56fa1f7aa305" containerName="extract-utilities" Nov 29 07:35:40 crc kubenswrapper[4947]: I1129 07:35:40.491456 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab1a01d0-cc98-41ff-9fba-56fa1f7aa305" containerName="extract-utilities" Nov 29 07:35:40 crc kubenswrapper[4947]: E1129 07:35:40.491464 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab1a01d0-cc98-41ff-9fba-56fa1f7aa305" containerName="extract-content" Nov 29 07:35:40 crc kubenswrapper[4947]: I1129 07:35:40.491470 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab1a01d0-cc98-41ff-9fba-56fa1f7aa305" containerName="extract-content" Nov 29 07:35:40 crc kubenswrapper[4947]: I1129 07:35:40.491649 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab1a01d0-cc98-41ff-9fba-56fa1f7aa305" containerName="registry-server" Nov 29 07:35:40 
crc kubenswrapper[4947]: I1129 07:35:40.493203 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmj2p" Nov 29 07:35:40 crc kubenswrapper[4947]: I1129 07:35:40.515844 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmj2p"] Nov 29 07:35:40 crc kubenswrapper[4947]: I1129 07:35:40.547671 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0553a3f-8101-4063-ba99-4e57a6887633-utilities\") pod \"redhat-marketplace-nmj2p\" (UID: \"c0553a3f-8101-4063-ba99-4e57a6887633\") " pod="openshift-marketplace/redhat-marketplace-nmj2p" Nov 29 07:35:40 crc kubenswrapper[4947]: I1129 07:35:40.547732 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0553a3f-8101-4063-ba99-4e57a6887633-catalog-content\") pod \"redhat-marketplace-nmj2p\" (UID: \"c0553a3f-8101-4063-ba99-4e57a6887633\") " pod="openshift-marketplace/redhat-marketplace-nmj2p" Nov 29 07:35:40 crc kubenswrapper[4947]: I1129 07:35:40.547768 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k64d8\" (UniqueName: \"kubernetes.io/projected/c0553a3f-8101-4063-ba99-4e57a6887633-kube-api-access-k64d8\") pod \"redhat-marketplace-nmj2p\" (UID: \"c0553a3f-8101-4063-ba99-4e57a6887633\") " pod="openshift-marketplace/redhat-marketplace-nmj2p" Nov 29 07:35:40 crc kubenswrapper[4947]: I1129 07:35:40.649966 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0553a3f-8101-4063-ba99-4e57a6887633-utilities\") pod \"redhat-marketplace-nmj2p\" (UID: \"c0553a3f-8101-4063-ba99-4e57a6887633\") " pod="openshift-marketplace/redhat-marketplace-nmj2p" Nov 29 07:35:40 crc 
kubenswrapper[4947]: I1129 07:35:40.650026 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0553a3f-8101-4063-ba99-4e57a6887633-catalog-content\") pod \"redhat-marketplace-nmj2p\" (UID: \"c0553a3f-8101-4063-ba99-4e57a6887633\") " pod="openshift-marketplace/redhat-marketplace-nmj2p" Nov 29 07:35:40 crc kubenswrapper[4947]: I1129 07:35:40.650059 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k64d8\" (UniqueName: \"kubernetes.io/projected/c0553a3f-8101-4063-ba99-4e57a6887633-kube-api-access-k64d8\") pod \"redhat-marketplace-nmj2p\" (UID: \"c0553a3f-8101-4063-ba99-4e57a6887633\") " pod="openshift-marketplace/redhat-marketplace-nmj2p" Nov 29 07:35:40 crc kubenswrapper[4947]: I1129 07:35:40.651171 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0553a3f-8101-4063-ba99-4e57a6887633-utilities\") pod \"redhat-marketplace-nmj2p\" (UID: \"c0553a3f-8101-4063-ba99-4e57a6887633\") " pod="openshift-marketplace/redhat-marketplace-nmj2p" Nov 29 07:35:40 crc kubenswrapper[4947]: I1129 07:35:40.651171 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0553a3f-8101-4063-ba99-4e57a6887633-catalog-content\") pod \"redhat-marketplace-nmj2p\" (UID: \"c0553a3f-8101-4063-ba99-4e57a6887633\") " pod="openshift-marketplace/redhat-marketplace-nmj2p" Nov 29 07:35:40 crc kubenswrapper[4947]: I1129 07:35:40.680825 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k64d8\" (UniqueName: \"kubernetes.io/projected/c0553a3f-8101-4063-ba99-4e57a6887633-kube-api-access-k64d8\") pod \"redhat-marketplace-nmj2p\" (UID: \"c0553a3f-8101-4063-ba99-4e57a6887633\") " pod="openshift-marketplace/redhat-marketplace-nmj2p" Nov 29 07:35:40 crc kubenswrapper[4947]: I1129 
07:35:40.826021 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmj2p" Nov 29 07:35:41 crc kubenswrapper[4947]: I1129 07:35:41.195173 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmj2p"] Nov 29 07:35:41 crc kubenswrapper[4947]: I1129 07:35:41.308687 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmj2p" event={"ID":"c0553a3f-8101-4063-ba99-4e57a6887633","Type":"ContainerStarted","Data":"b512aebf7e1903f30889eb770958175f72e526ccd01279befb75597cdeb8d38e"} Nov 29 07:35:42 crc kubenswrapper[4947]: I1129 07:35:42.318754 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmj2p" event={"ID":"c0553a3f-8101-4063-ba99-4e57a6887633","Type":"ContainerDied","Data":"1ff99fe09ba4e7accc3b19c6138d8c206e0ed6a4269bdd17f5090145ecaa9dd7"} Nov 29 07:35:42 crc kubenswrapper[4947]: I1129 07:35:42.318698 4947 generic.go:334] "Generic (PLEG): container finished" podID="c0553a3f-8101-4063-ba99-4e57a6887633" containerID="1ff99fe09ba4e7accc3b19c6138d8c206e0ed6a4269bdd17f5090145ecaa9dd7" exitCode=0 Nov 29 07:35:43 crc kubenswrapper[4947]: I1129 07:35:43.336560 4947 generic.go:334] "Generic (PLEG): container finished" podID="c0553a3f-8101-4063-ba99-4e57a6887633" containerID="eeeae2de79d714c6991415017d172d02f77a068fdc934611aa8e5256205900d8" exitCode=0 Nov 29 07:35:43 crc kubenswrapper[4947]: I1129 07:35:43.336644 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmj2p" event={"ID":"c0553a3f-8101-4063-ba99-4e57a6887633","Type":"ContainerDied","Data":"eeeae2de79d714c6991415017d172d02f77a068fdc934611aa8e5256205900d8"} Nov 29 07:35:44 crc kubenswrapper[4947]: I1129 07:35:44.350858 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmj2p" 
event={"ID":"c0553a3f-8101-4063-ba99-4e57a6887633","Type":"ContainerStarted","Data":"c26722d0456e805477046bc0fe0911213fd47d43398be19e1b5f9d77093bbf8e"} Nov 29 07:35:44 crc kubenswrapper[4947]: I1129 07:35:44.379872 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nmj2p" podStartSLOduration=2.943577645 podStartE2EDuration="4.379854683s" podCreationTimestamp="2025-11-29 07:35:40 +0000 UTC" firstStartedPulling="2025-11-29 07:35:42.321362403 +0000 UTC m=+3693.365744484" lastFinishedPulling="2025-11-29 07:35:43.757639441 +0000 UTC m=+3694.802021522" observedRunningTime="2025-11-29 07:35:44.374538339 +0000 UTC m=+3695.418920420" watchObservedRunningTime="2025-11-29 07:35:44.379854683 +0000 UTC m=+3695.424236764" Nov 29 07:35:50 crc kubenswrapper[4947]: I1129 07:35:50.827437 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nmj2p" Nov 29 07:35:50 crc kubenswrapper[4947]: I1129 07:35:50.828042 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nmj2p" Nov 29 07:35:50 crc kubenswrapper[4947]: I1129 07:35:50.882137 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nmj2p" Nov 29 07:35:51 crc kubenswrapper[4947]: I1129 07:35:51.496894 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nmj2p" Nov 29 07:35:51 crc kubenswrapper[4947]: I1129 07:35:51.551383 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmj2p"] Nov 29 07:35:53 crc kubenswrapper[4947]: I1129 07:35:53.450076 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nmj2p" podUID="c0553a3f-8101-4063-ba99-4e57a6887633" containerName="registry-server" 
containerID="cri-o://c26722d0456e805477046bc0fe0911213fd47d43398be19e1b5f9d77093bbf8e" gracePeriod=2 Nov 29 07:35:53 crc kubenswrapper[4947]: I1129 07:35:53.939015 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmj2p" Nov 29 07:35:53 crc kubenswrapper[4947]: I1129 07:35:53.995337 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0553a3f-8101-4063-ba99-4e57a6887633-utilities\") pod \"c0553a3f-8101-4063-ba99-4e57a6887633\" (UID: \"c0553a3f-8101-4063-ba99-4e57a6887633\") " Nov 29 07:35:53 crc kubenswrapper[4947]: I1129 07:35:53.995520 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k64d8\" (UniqueName: \"kubernetes.io/projected/c0553a3f-8101-4063-ba99-4e57a6887633-kube-api-access-k64d8\") pod \"c0553a3f-8101-4063-ba99-4e57a6887633\" (UID: \"c0553a3f-8101-4063-ba99-4e57a6887633\") " Nov 29 07:35:53 crc kubenswrapper[4947]: I1129 07:35:53.995599 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0553a3f-8101-4063-ba99-4e57a6887633-catalog-content\") pod \"c0553a3f-8101-4063-ba99-4e57a6887633\" (UID: \"c0553a3f-8101-4063-ba99-4e57a6887633\") " Nov 29 07:35:53 crc kubenswrapper[4947]: I1129 07:35:53.997126 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0553a3f-8101-4063-ba99-4e57a6887633-utilities" (OuterVolumeSpecName: "utilities") pod "c0553a3f-8101-4063-ba99-4e57a6887633" (UID: "c0553a3f-8101-4063-ba99-4e57a6887633"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:35:54 crc kubenswrapper[4947]: I1129 07:35:54.005364 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0553a3f-8101-4063-ba99-4e57a6887633-kube-api-access-k64d8" (OuterVolumeSpecName: "kube-api-access-k64d8") pod "c0553a3f-8101-4063-ba99-4e57a6887633" (UID: "c0553a3f-8101-4063-ba99-4e57a6887633"). InnerVolumeSpecName "kube-api-access-k64d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:35:54 crc kubenswrapper[4947]: I1129 07:35:54.024544 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0553a3f-8101-4063-ba99-4e57a6887633-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0553a3f-8101-4063-ba99-4e57a6887633" (UID: "c0553a3f-8101-4063-ba99-4e57a6887633"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:35:54 crc kubenswrapper[4947]: I1129 07:35:54.098478 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k64d8\" (UniqueName: \"kubernetes.io/projected/c0553a3f-8101-4063-ba99-4e57a6887633-kube-api-access-k64d8\") on node \"crc\" DevicePath \"\"" Nov 29 07:35:54 crc kubenswrapper[4947]: I1129 07:35:54.098539 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0553a3f-8101-4063-ba99-4e57a6887633-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:35:54 crc kubenswrapper[4947]: I1129 07:35:54.098556 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0553a3f-8101-4063-ba99-4e57a6887633-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:35:54 crc kubenswrapper[4947]: I1129 07:35:54.464099 4947 generic.go:334] "Generic (PLEG): container finished" podID="c0553a3f-8101-4063-ba99-4e57a6887633" 
containerID="c26722d0456e805477046bc0fe0911213fd47d43398be19e1b5f9d77093bbf8e" exitCode=0 Nov 29 07:35:54 crc kubenswrapper[4947]: I1129 07:35:54.464173 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmj2p" Nov 29 07:35:54 crc kubenswrapper[4947]: I1129 07:35:54.464183 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmj2p" event={"ID":"c0553a3f-8101-4063-ba99-4e57a6887633","Type":"ContainerDied","Data":"c26722d0456e805477046bc0fe0911213fd47d43398be19e1b5f9d77093bbf8e"} Nov 29 07:35:54 crc kubenswrapper[4947]: I1129 07:35:54.464253 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmj2p" event={"ID":"c0553a3f-8101-4063-ba99-4e57a6887633","Type":"ContainerDied","Data":"b512aebf7e1903f30889eb770958175f72e526ccd01279befb75597cdeb8d38e"} Nov 29 07:35:54 crc kubenswrapper[4947]: I1129 07:35:54.464286 4947 scope.go:117] "RemoveContainer" containerID="c26722d0456e805477046bc0fe0911213fd47d43398be19e1b5f9d77093bbf8e" Nov 29 07:35:54 crc kubenswrapper[4947]: I1129 07:35:54.489461 4947 scope.go:117] "RemoveContainer" containerID="eeeae2de79d714c6991415017d172d02f77a068fdc934611aa8e5256205900d8" Nov 29 07:35:54 crc kubenswrapper[4947]: I1129 07:35:54.519710 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmj2p"] Nov 29 07:35:54 crc kubenswrapper[4947]: I1129 07:35:54.531567 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmj2p"] Nov 29 07:35:54 crc kubenswrapper[4947]: I1129 07:35:54.533447 4947 scope.go:117] "RemoveContainer" containerID="1ff99fe09ba4e7accc3b19c6138d8c206e0ed6a4269bdd17f5090145ecaa9dd7" Nov 29 07:35:54 crc kubenswrapper[4947]: I1129 07:35:54.590667 4947 scope.go:117] "RemoveContainer" containerID="c26722d0456e805477046bc0fe0911213fd47d43398be19e1b5f9d77093bbf8e" Nov 29 
07:35:54 crc kubenswrapper[4947]: E1129 07:35:54.591212 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c26722d0456e805477046bc0fe0911213fd47d43398be19e1b5f9d77093bbf8e\": container with ID starting with c26722d0456e805477046bc0fe0911213fd47d43398be19e1b5f9d77093bbf8e not found: ID does not exist" containerID="c26722d0456e805477046bc0fe0911213fd47d43398be19e1b5f9d77093bbf8e" Nov 29 07:35:54 crc kubenswrapper[4947]: I1129 07:35:54.591278 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c26722d0456e805477046bc0fe0911213fd47d43398be19e1b5f9d77093bbf8e"} err="failed to get container status \"c26722d0456e805477046bc0fe0911213fd47d43398be19e1b5f9d77093bbf8e\": rpc error: code = NotFound desc = could not find container \"c26722d0456e805477046bc0fe0911213fd47d43398be19e1b5f9d77093bbf8e\": container with ID starting with c26722d0456e805477046bc0fe0911213fd47d43398be19e1b5f9d77093bbf8e not found: ID does not exist" Nov 29 07:35:54 crc kubenswrapper[4947]: I1129 07:35:54.591302 4947 scope.go:117] "RemoveContainer" containerID="eeeae2de79d714c6991415017d172d02f77a068fdc934611aa8e5256205900d8" Nov 29 07:35:54 crc kubenswrapper[4947]: E1129 07:35:54.591576 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeeae2de79d714c6991415017d172d02f77a068fdc934611aa8e5256205900d8\": container with ID starting with eeeae2de79d714c6991415017d172d02f77a068fdc934611aa8e5256205900d8 not found: ID does not exist" containerID="eeeae2de79d714c6991415017d172d02f77a068fdc934611aa8e5256205900d8" Nov 29 07:35:54 crc kubenswrapper[4947]: I1129 07:35:54.591604 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeeae2de79d714c6991415017d172d02f77a068fdc934611aa8e5256205900d8"} err="failed to get container status 
\"eeeae2de79d714c6991415017d172d02f77a068fdc934611aa8e5256205900d8\": rpc error: code = NotFound desc = could not find container \"eeeae2de79d714c6991415017d172d02f77a068fdc934611aa8e5256205900d8\": container with ID starting with eeeae2de79d714c6991415017d172d02f77a068fdc934611aa8e5256205900d8 not found: ID does not exist" Nov 29 07:35:54 crc kubenswrapper[4947]: I1129 07:35:54.591618 4947 scope.go:117] "RemoveContainer" containerID="1ff99fe09ba4e7accc3b19c6138d8c206e0ed6a4269bdd17f5090145ecaa9dd7" Nov 29 07:35:54 crc kubenswrapper[4947]: E1129 07:35:54.591874 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ff99fe09ba4e7accc3b19c6138d8c206e0ed6a4269bdd17f5090145ecaa9dd7\": container with ID starting with 1ff99fe09ba4e7accc3b19c6138d8c206e0ed6a4269bdd17f5090145ecaa9dd7 not found: ID does not exist" containerID="1ff99fe09ba4e7accc3b19c6138d8c206e0ed6a4269bdd17f5090145ecaa9dd7" Nov 29 07:35:54 crc kubenswrapper[4947]: I1129 07:35:54.591905 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ff99fe09ba4e7accc3b19c6138d8c206e0ed6a4269bdd17f5090145ecaa9dd7"} err="failed to get container status \"1ff99fe09ba4e7accc3b19c6138d8c206e0ed6a4269bdd17f5090145ecaa9dd7\": rpc error: code = NotFound desc = could not find container \"1ff99fe09ba4e7accc3b19c6138d8c206e0ed6a4269bdd17f5090145ecaa9dd7\": container with ID starting with 1ff99fe09ba4e7accc3b19c6138d8c206e0ed6a4269bdd17f5090145ecaa9dd7 not found: ID does not exist" Nov 29 07:35:55 crc kubenswrapper[4947]: I1129 07:35:55.190708 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0553a3f-8101-4063-ba99-4e57a6887633" path="/var/lib/kubelet/pods/c0553a3f-8101-4063-ba99-4e57a6887633/volumes" Nov 29 07:37:22 crc kubenswrapper[4947]: I1129 07:37:22.988480 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:37:22 crc kubenswrapper[4947]: I1129 07:37:22.989120 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:37:52 crc kubenswrapper[4947]: I1129 07:37:52.987798 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:37:52 crc kubenswrapper[4947]: I1129 07:37:52.988375 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:38:22 crc kubenswrapper[4947]: I1129 07:38:22.987839 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:38:22 crc kubenswrapper[4947]: I1129 07:38:22.988691 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:38:22 crc kubenswrapper[4947]: I1129 07:38:22.988761 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 07:38:22 crc kubenswrapper[4947]: I1129 07:38:22.989782 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166"} pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:38:22 crc kubenswrapper[4947]: I1129 07:38:22.989846 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" containerID="cri-o://befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" gracePeriod=600 Nov 29 07:38:23 crc kubenswrapper[4947]: E1129 07:38:23.117433 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:38:24 crc kubenswrapper[4947]: I1129 07:38:24.092206 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" exitCode=0 Nov 29 07:38:24 crc kubenswrapper[4947]: I1129 07:38:24.092275 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerDied","Data":"befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166"} Nov 29 07:38:24 crc kubenswrapper[4947]: I1129 07:38:24.092347 4947 scope.go:117] "RemoveContainer" containerID="f0e0197d224bb0f950a728e1a6a38d3e1a99970cb7adaa9271f86cc1b41a4062" Nov 29 07:38:24 crc kubenswrapper[4947]: I1129 07:38:24.093369 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:38:24 crc kubenswrapper[4947]: E1129 07:38:24.093911 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:38:38 crc kubenswrapper[4947]: I1129 07:38:38.180176 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:38:38 crc kubenswrapper[4947]: E1129 07:38:38.181102 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:38:41 crc kubenswrapper[4947]: I1129 07:38:41.260815 4947 generic.go:334] "Generic (PLEG): container finished" podID="f14a521e-01c4-4720-984d-65f1123397ae" containerID="ecdb04437dfd44c24a9c1e8dcb761803beb696b473dc77bdc3a0e9131568d65a" exitCode=0 Nov 
29 07:38:41 crc kubenswrapper[4947]: I1129 07:38:41.260926 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" event={"ID":"f14a521e-01c4-4720-984d-65f1123397ae","Type":"ContainerDied","Data":"ecdb04437dfd44c24a9c1e8dcb761803beb696b473dc77bdc3a0e9131568d65a"} Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.701291 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.813124 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-ceph\") pod \"f14a521e-01c4-4720-984d-65f1123397ae\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.813286 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-cell1-compute-config-1\") pod \"f14a521e-01c4-4720-984d-65f1123397ae\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.813345 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/f14a521e-01c4-4720-984d-65f1123397ae-ceph-nova-0\") pod \"f14a521e-01c4-4720-984d-65f1123397ae\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.813380 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-migration-ssh-key-1\") pod \"f14a521e-01c4-4720-984d-65f1123397ae\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " Nov 29 
07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.813432 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-migration-ssh-key-0\") pod \"f14a521e-01c4-4720-984d-65f1123397ae\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.813501 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-custom-ceph-combined-ca-bundle\") pod \"f14a521e-01c4-4720-984d-65f1123397ae\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.813566 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-cell1-compute-config-0\") pod \"f14a521e-01c4-4720-984d-65f1123397ae\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.813605 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f14a521e-01c4-4720-984d-65f1123397ae-nova-extra-config-0\") pod \"f14a521e-01c4-4720-984d-65f1123397ae\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.813638 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-ssh-key\") pod \"f14a521e-01c4-4720-984d-65f1123397ae\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.813658 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-x2mll\" (UniqueName: \"kubernetes.io/projected/f14a521e-01c4-4720-984d-65f1123397ae-kube-api-access-x2mll\") pod \"f14a521e-01c4-4720-984d-65f1123397ae\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.813677 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-inventory\") pod \"f14a521e-01c4-4720-984d-65f1123397ae\" (UID: \"f14a521e-01c4-4720-984d-65f1123397ae\") " Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.820520 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-ceph" (OuterVolumeSpecName: "ceph") pod "f14a521e-01c4-4720-984d-65f1123397ae" (UID: "f14a521e-01c4-4720-984d-65f1123397ae"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.820691 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f14a521e-01c4-4720-984d-65f1123397ae-kube-api-access-x2mll" (OuterVolumeSpecName: "kube-api-access-x2mll") pod "f14a521e-01c4-4720-984d-65f1123397ae" (UID: "f14a521e-01c4-4720-984d-65f1123397ae"). InnerVolumeSpecName "kube-api-access-x2mll". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.823338 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "f14a521e-01c4-4720-984d-65f1123397ae" (UID: "f14a521e-01c4-4720-984d-65f1123397ae"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.847709 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f14a521e-01c4-4720-984d-65f1123397ae" (UID: "f14a521e-01c4-4720-984d-65f1123397ae"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.850130 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "f14a521e-01c4-4720-984d-65f1123397ae" (UID: "f14a521e-01c4-4720-984d-65f1123397ae"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.850703 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "f14a521e-01c4-4720-984d-65f1123397ae" (UID: "f14a521e-01c4-4720-984d-65f1123397ae"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.851182 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "f14a521e-01c4-4720-984d-65f1123397ae" (UID: "f14a521e-01c4-4720-984d-65f1123397ae"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.861603 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f14a521e-01c4-4720-984d-65f1123397ae-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "f14a521e-01c4-4720-984d-65f1123397ae" (UID: "f14a521e-01c4-4720-984d-65f1123397ae"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.862480 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-inventory" (OuterVolumeSpecName: "inventory") pod "f14a521e-01c4-4720-984d-65f1123397ae" (UID: "f14a521e-01c4-4720-984d-65f1123397ae"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.869051 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f14a521e-01c4-4720-984d-65f1123397ae-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "f14a521e-01c4-4720-984d-65f1123397ae" (UID: "f14a521e-01c4-4720-984d-65f1123397ae"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.878132 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "f14a521e-01c4-4720-984d-65f1123397ae" (UID: "f14a521e-01c4-4720-984d-65f1123397ae"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.916495 4947 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.916815 4947 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.916918 4947 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f14a521e-01c4-4720-984d-65f1123397ae-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.917040 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.917164 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2mll\" (UniqueName: \"kubernetes.io/projected/f14a521e-01c4-4720-984d-65f1123397ae-kube-api-access-x2mll\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.917355 4947 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.917511 4947 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-ceph\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:42 crc 
kubenswrapper[4947]: I1129 07:38:42.917668 4947 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.917793 4947 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/f14a521e-01c4-4720-984d-65f1123397ae-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.917929 4947 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:42 crc kubenswrapper[4947]: I1129 07:38:42.918009 4947 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f14a521e-01c4-4720-984d-65f1123397ae-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:43 crc kubenswrapper[4947]: I1129 07:38:43.279787 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" event={"ID":"f14a521e-01c4-4720-984d-65f1123397ae","Type":"ContainerDied","Data":"33260e5540b1f380b44d058d3408d92263826b9e1ff364fffb141aefb3f454ac"} Nov 29 07:38:43 crc kubenswrapper[4947]: I1129 07:38:43.279842 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33260e5540b1f380b44d058d3408d92263826b9e1ff364fffb141aefb3f454ac" Nov 29 07:38:43 crc kubenswrapper[4947]: I1129 07:38:43.279844 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx" Nov 29 07:38:53 crc kubenswrapper[4947]: I1129 07:38:53.179466 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:38:53 crc kubenswrapper[4947]: E1129 07:38:53.180762 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.219187 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Nov 29 07:38:58 crc kubenswrapper[4947]: E1129 07:38:58.220744 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0553a3f-8101-4063-ba99-4e57a6887633" containerName="registry-server" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.220763 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0553a3f-8101-4063-ba99-4e57a6887633" containerName="registry-server" Nov 29 07:38:58 crc kubenswrapper[4947]: E1129 07:38:58.220781 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0553a3f-8101-4063-ba99-4e57a6887633" containerName="extract-utilities" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.220788 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0553a3f-8101-4063-ba99-4e57a6887633" containerName="extract-utilities" Nov 29 07:38:58 crc kubenswrapper[4947]: E1129 07:38:58.220815 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f14a521e-01c4-4720-984d-65f1123397ae" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Nov 29 07:38:58 crc 
kubenswrapper[4947]: I1129 07:38:58.220824 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f14a521e-01c4-4720-984d-65f1123397ae" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Nov 29 07:38:58 crc kubenswrapper[4947]: E1129 07:38:58.220840 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0553a3f-8101-4063-ba99-4e57a6887633" containerName="extract-content" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.220846 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0553a3f-8101-4063-ba99-4e57a6887633" containerName="extract-content" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.221046 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0553a3f-8101-4063-ba99-4e57a6887633" containerName="registry-server" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.221088 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f14a521e-01c4-4720-984d-65f1123397ae" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.222396 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.225119 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.227739 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.262153 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.327242 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-lib-modules\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.327333 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7887622b-8fe9-4ea2-a867-3490f70c1d87-config-data\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.327361 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7887622b-8fe9-4ea2-a867-3490f70c1d87-ceph\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.327425 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7887622b-8fe9-4ea2-a867-3490f70c1d87-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: 
\"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.327468 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-run\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.327647 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-dev\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.327728 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.327778 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsw89\" (UniqueName: \"kubernetes.io/projected/7887622b-8fe9-4ea2-a867-3490f70c1d87-kube-api-access-dsw89\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.327818 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-sys\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 
07:38:58.327984 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-etc-nvme\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.328022 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.328119 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.328166 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7887622b-8fe9-4ea2-a867-3490f70c1d87-scripts\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.328202 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.328255 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7887622b-8fe9-4ea2-a867-3490f70c1d87-config-data-custom\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.328284 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.333446 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.335936 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.342519 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.369255 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434378 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434445 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/06d1947f-99c6-4f58-93f6-b15ea0b89743-ceph\") pod \"cinder-volume-volume1-0\" (UID: 
\"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434499 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-dev\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434526 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-etc-nvme\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434546 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434583 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434601 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06d1947f-99c6-4f58-93f6-b15ea0b89743-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: 
I1129 07:38:58.434619 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06d1947f-99c6-4f58-93f6-b15ea0b89743-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434638 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbgck\" (UniqueName: \"kubernetes.io/projected/06d1947f-99c6-4f58-93f6-b15ea0b89743-kube-api-access-kbgck\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434654 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-sys\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434674 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434689 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06d1947f-99c6-4f58-93f6-b15ea0b89743-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434713 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7887622b-8fe9-4ea2-a867-3490f70c1d87-scripts\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434733 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434755 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7887622b-8fe9-4ea2-a867-3490f70c1d87-config-data-custom\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434780 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434797 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434821 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-etc-iscsi\") 
pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434842 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-lib-modules\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434870 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7887622b-8fe9-4ea2-a867-3490f70c1d87-config-data\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434887 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7887622b-8fe9-4ea2-a867-3490f70c1d87-ceph\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434911 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434934 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7887622b-8fe9-4ea2-a867-3490f70c1d87-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434953 
4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-run\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434970 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-run\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.434994 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.435016 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06d1947f-99c6-4f58-93f6-b15ea0b89743-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.435039 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.435089 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dev\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-dev\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.435116 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.435138 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsw89\" (UniqueName: \"kubernetes.io/projected/7887622b-8fe9-4ea2-a867-3490f70c1d87-kube-api-access-dsw89\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.435159 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-sys\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.435273 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-sys\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.435438 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-etc-nvme\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: 
I1129 07:38:58.435614 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.435661 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.436368 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.436604 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-run\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.436644 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-dev\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.436650 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-var-locks-cinder\") pod \"cinder-backup-0\" (UID: 
\"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.436683 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-lib-modules\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.436738 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7887622b-8fe9-4ea2-a867-3490f70c1d87-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.446440 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7887622b-8fe9-4ea2-a867-3490f70c1d87-scripts\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.446963 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7887622b-8fe9-4ea2-a867-3490f70c1d87-config-data-custom\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.447290 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7887622b-8fe9-4ea2-a867-3490f70c1d87-config-data\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.447393 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7887622b-8fe9-4ea2-a867-3490f70c1d87-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.458934 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7887622b-8fe9-4ea2-a867-3490f70c1d87-ceph\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.465466 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsw89\" (UniqueName: \"kubernetes.io/projected/7887622b-8fe9-4ea2-a867-3490f70c1d87-kube-api-access-dsw89\") pod \"cinder-backup-0\" (UID: \"7887622b-8fe9-4ea2-a867-3490f70c1d87\") " pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537168 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537242 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537284 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " 
pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537313 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-run\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537333 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537353 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06d1947f-99c6-4f58-93f6-b15ea0b89743-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537373 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537459 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-run\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537485 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537621 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537664 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537696 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/06d1947f-99c6-4f58-93f6-b15ea0b89743-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537715 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537736 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-dev\") pod 
\"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537768 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537785 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06d1947f-99c6-4f58-93f6-b15ea0b89743-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537803 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06d1947f-99c6-4f58-93f6-b15ea0b89743-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537824 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbgck\" (UniqueName: \"kubernetes.io/projected/06d1947f-99c6-4f58-93f6-b15ea0b89743-kube-api-access-kbgck\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537611 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 
07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537838 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-sys\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537859 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-sys\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537665 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537896 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-dev\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537932 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.538371 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/06d1947f-99c6-4f58-93f6-b15ea0b89743-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.537617 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/06d1947f-99c6-4f58-93f6-b15ea0b89743-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.547588 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/06d1947f-99c6-4f58-93f6-b15ea0b89743-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.549582 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.555380 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06d1947f-99c6-4f58-93f6-b15ea0b89743-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.555522 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06d1947f-99c6-4f58-93f6-b15ea0b89743-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.555670 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06d1947f-99c6-4f58-93f6-b15ea0b89743-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.555955 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06d1947f-99c6-4f58-93f6-b15ea0b89743-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.561602 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbgck\" (UniqueName: \"kubernetes.io/projected/06d1947f-99c6-4f58-93f6-b15ea0b89743-kube-api-access-kbgck\") pod \"cinder-volume-volume1-0\" (UID: \"06d1947f-99c6-4f58-93f6-b15ea0b89743\") " pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:58 crc kubenswrapper[4947]: I1129 07:38:58.677407 4947 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.066696 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.092093 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.147228 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.153474 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.153651 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8278w" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.153871 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.154383 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.268780 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d01fd87b-812e-4368-9f75-39cf578e76ac-ceph\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.268888 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-public-tls-certs\") pod \"glance-default-external-api-0\" 
(UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.268971 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q27dd\" (UniqueName: \"kubernetes.io/projected/d01fd87b-812e-4368-9f75-39cf578e76ac-kube-api-access-q27dd\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.269146 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.269291 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d01fd87b-812e-4368-9f75-39cf578e76ac-logs\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.269537 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.269634 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.269657 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d01fd87b-812e-4368-9f75-39cf578e76ac-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.269727 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-scripts\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.313512 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.337710 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.337855 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.349658 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.349925 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.391502 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-tjns7"] Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.410542 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.410726 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d01fd87b-812e-4368-9f75-39cf578e76ac-logs\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.411016 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.411132 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.411204 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d01fd87b-812e-4368-9f75-39cf578e76ac-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.411318 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-scripts\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.411419 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d01fd87b-812e-4368-9f75-39cf578e76ac-ceph\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.411487 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.411582 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q27dd\" (UniqueName: \"kubernetes.io/projected/d01fd87b-812e-4368-9f75-39cf578e76ac-kube-api-access-q27dd\") pod \"glance-default-external-api-0\" (UID: 
\"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.414188 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-tjns7" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.419371 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d01fd87b-812e-4368-9f75-39cf578e76ac-logs\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.420004 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d01fd87b-812e-4368-9f75-39cf578e76ac-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.422187 4947 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.450156 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d01fd87b-812e-4368-9f75-39cf578e76ac-ceph\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.455321 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.455683 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-scripts\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.455928 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q27dd\" (UniqueName: \"kubernetes.io/projected/d01fd87b-812e-4368-9f75-39cf578e76ac-kube-api-access-q27dd\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.469341 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.470817 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-config-data\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.487584 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-tjns7"] Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.516683 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.516796 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15bbf1b5-d20c-4156-a60e-d2720d012aa0-operator-scripts\") pod \"manila-db-create-tjns7\" (UID: \"15bbf1b5-d20c-4156-a60e-d2720d012aa0\") " pod="openstack/manila-db-create-tjns7" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.516834 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nfzl\" (UniqueName: \"kubernetes.io/projected/e13a36c7-9b31-4592-91dd-881412235914-kube-api-access-4nfzl\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.516900 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e13a36c7-9b31-4592-91dd-881412235914-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.516957 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 
07:38:59.517053 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz57l\" (UniqueName: \"kubernetes.io/projected/15bbf1b5-d20c-4156-a60e-d2720d012aa0-kube-api-access-rz57l\") pod \"manila-db-create-tjns7\" (UID: \"15bbf1b5-d20c-4156-a60e-d2720d012aa0\") " pod="openstack/manila-db-create-tjns7" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.517119 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e13a36c7-9b31-4592-91dd-881412235914-logs\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.517164 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e13a36c7-9b31-4592-91dd-881412235914-ceph\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.517212 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.517273 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 
07:38:59.517311 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.517845 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5468b7496f-chvvc"] Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.537388 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5468b7496f-chvvc" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.553827 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5468b7496f-chvvc"] Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.566567 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-mjj6f" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.573982 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.574476 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.575368 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.619396 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz57l\" (UniqueName: \"kubernetes.io/projected/15bbf1b5-d20c-4156-a60e-d2720d012aa0-kube-api-access-rz57l\") pod \"manila-db-create-tjns7\" (UID: \"15bbf1b5-d20c-4156-a60e-d2720d012aa0\") " pod="openstack/manila-db-create-tjns7" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.619927 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e13a36c7-9b31-4592-91dd-881412235914-logs\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.619971 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e13a36c7-9b31-4592-91dd-881412235914-ceph\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.620007 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18340ffc-dd9e-437a-b838-7d63a0fd0102-config-data\") pod \"horizon-5468b7496f-chvvc\" (UID: \"18340ffc-dd9e-437a-b838-7d63a0fd0102\") " pod="openstack/horizon-5468b7496f-chvvc" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.620059 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.620087 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.620116 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/18340ffc-dd9e-437a-b838-7d63a0fd0102-logs\") pod \"horizon-5468b7496f-chvvc\" (UID: \"18340ffc-dd9e-437a-b838-7d63a0fd0102\") " pod="openstack/horizon-5468b7496f-chvvc" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.620142 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwhgs\" (UniqueName: \"kubernetes.io/projected/18340ffc-dd9e-437a-b838-7d63a0fd0102-kube-api-access-wwhgs\") pod \"horizon-5468b7496f-chvvc\" (UID: \"18340ffc-dd9e-437a-b838-7d63a0fd0102\") " pod="openstack/horizon-5468b7496f-chvvc" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.620171 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.620333 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18340ffc-dd9e-437a-b838-7d63a0fd0102-scripts\") pod \"horizon-5468b7496f-chvvc\" (UID: \"18340ffc-dd9e-437a-b838-7d63a0fd0102\") " pod="openstack/horizon-5468b7496f-chvvc" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.620380 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.620458 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/15bbf1b5-d20c-4156-a60e-d2720d012aa0-operator-scripts\") pod \"manila-db-create-tjns7\" (UID: \"15bbf1b5-d20c-4156-a60e-d2720d012aa0\") " pod="openstack/manila-db-create-tjns7" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.620512 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nfzl\" (UniqueName: \"kubernetes.io/projected/e13a36c7-9b31-4592-91dd-881412235914-kube-api-access-4nfzl\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.620611 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e13a36c7-9b31-4592-91dd-881412235914-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.620648 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/18340ffc-dd9e-437a-b838-7d63a0fd0102-horizon-secret-key\") pod \"horizon-5468b7496f-chvvc\" (UID: \"18340ffc-dd9e-437a-b838-7d63a0fd0102\") " pod="openstack/horizon-5468b7496f-chvvc" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.620716 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.620935 4947 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.626398 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15bbf1b5-d20c-4156-a60e-d2720d012aa0-operator-scripts\") pod \"manila-db-create-tjns7\" (UID: \"15bbf1b5-d20c-4156-a60e-d2720d012aa0\") " pod="openstack/manila-db-create-tjns7" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.626476 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e13a36c7-9b31-4592-91dd-881412235914-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.637369 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e13a36c7-9b31-4592-91dd-881412235914-logs\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.637892 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.643661 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="bc968903-97f7-437d-882d-1bb4278dab13" containerName="galera" probeResult="failure" output="command timed out" 
Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.655183 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.656329 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e13a36c7-9b31-4592-91dd-881412235914-ceph\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.660037 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.675134 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz57l\" (UniqueName: \"kubernetes.io/projected/15bbf1b5-d20c-4156-a60e-d2720d012aa0-kube-api-access-rz57l\") pod \"manila-db-create-tjns7\" (UID: \"15bbf1b5-d20c-4156-a60e-d2720d012aa0\") " pod="openstack/manila-db-create-tjns7" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.681263 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.700401 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.710455 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-83fe-account-create-update-lc9t4"] Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.712368 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-83fe-account-create-update-lc9t4" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.713071 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nfzl\" (UniqueName: \"kubernetes.io/projected/e13a36c7-9b31-4592-91dd-881412235914-kube-api-access-4nfzl\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.726151 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18340ffc-dd9e-437a-b838-7d63a0fd0102-config-data\") pod \"horizon-5468b7496f-chvvc\" (UID: \"18340ffc-dd9e-437a-b838-7d63a0fd0102\") " pod="openstack/horizon-5468b7496f-chvvc" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.726261 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18340ffc-dd9e-437a-b838-7d63a0fd0102-logs\") pod \"horizon-5468b7496f-chvvc\" (UID: \"18340ffc-dd9e-437a-b838-7d63a0fd0102\") " pod="openstack/horizon-5468b7496f-chvvc" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.726296 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwhgs\" (UniqueName: 
\"kubernetes.io/projected/18340ffc-dd9e-437a-b838-7d63a0fd0102-kube-api-access-wwhgs\") pod \"horizon-5468b7496f-chvvc\" (UID: \"18340ffc-dd9e-437a-b838-7d63a0fd0102\") " pod="openstack/horizon-5468b7496f-chvvc" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.726376 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18340ffc-dd9e-437a-b838-7d63a0fd0102-scripts\") pod \"horizon-5468b7496f-chvvc\" (UID: \"18340ffc-dd9e-437a-b838-7d63a0fd0102\") " pod="openstack/horizon-5468b7496f-chvvc" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.726530 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/18340ffc-dd9e-437a-b838-7d63a0fd0102-horizon-secret-key\") pod \"horizon-5468b7496f-chvvc\" (UID: \"18340ffc-dd9e-437a-b838-7d63a0fd0102\") " pod="openstack/horizon-5468b7496f-chvvc" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.726960 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.729075 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18340ffc-dd9e-437a-b838-7d63a0fd0102-config-data\") pod \"horizon-5468b7496f-chvvc\" (UID: \"18340ffc-dd9e-437a-b838-7d63a0fd0102\") " pod="openstack/horizon-5468b7496f-chvvc" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.729213 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18340ffc-dd9e-437a-b838-7d63a0fd0102-logs\") pod \"horizon-5468b7496f-chvvc\" (UID: \"18340ffc-dd9e-437a-b838-7d63a0fd0102\") " pod="openstack/horizon-5468b7496f-chvvc" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.738005 4947 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.742041 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18340ffc-dd9e-437a-b838-7d63a0fd0102-scripts\") pod \"horizon-5468b7496f-chvvc\" (UID: \"18340ffc-dd9e-437a-b838-7d63a0fd0102\") " pod="openstack/horizon-5468b7496f-chvvc" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.756567 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.757189 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/18340ffc-dd9e-437a-b838-7d63a0fd0102-horizon-secret-key\") pod \"horizon-5468b7496f-chvvc\" (UID: \"18340ffc-dd9e-437a-b838-7d63a0fd0102\") " pod="openstack/horizon-5468b7496f-chvvc" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.768106 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-83fe-account-create-update-lc9t4"] Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.780043 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwhgs\" (UniqueName: \"kubernetes.io/projected/18340ffc-dd9e-437a-b838-7d63a0fd0102-kube-api-access-wwhgs\") pod \"horizon-5468b7496f-chvvc\" (UID: \"18340ffc-dd9e-437a-b838-7d63a0fd0102\") " pod="openstack/horizon-5468b7496f-chvvc" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.829126 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb628464-6376-4dc1-8142-0b9134560fb3-operator-scripts\") pod 
\"manila-83fe-account-create-update-lc9t4\" (UID: \"eb628464-6376-4dc1-8142-0b9134560fb3\") " pod="openstack/manila-83fe-account-create-update-lc9t4" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.829496 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj76x\" (UniqueName: \"kubernetes.io/projected/eb628464-6376-4dc1-8142-0b9134560fb3-kube-api-access-cj76x\") pod \"manila-83fe-account-create-update-lc9t4\" (UID: \"eb628464-6376-4dc1-8142-0b9134560fb3\") " pod="openstack/manila-83fe-account-create-update-lc9t4" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.871743 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.873280 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.880735 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.905950 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-tjns7" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.938755 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj76x\" (UniqueName: \"kubernetes.io/projected/eb628464-6376-4dc1-8142-0b9134560fb3-kube-api-access-cj76x\") pod \"manila-83fe-account-create-update-lc9t4\" (UID: \"eb628464-6376-4dc1-8142-0b9134560fb3\") " pod="openstack/manila-83fe-account-create-update-lc9t4" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.940175 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb628464-6376-4dc1-8142-0b9134560fb3-operator-scripts\") pod \"manila-83fe-account-create-update-lc9t4\" (UID: \"eb628464-6376-4dc1-8142-0b9134560fb3\") " pod="openstack/manila-83fe-account-create-update-lc9t4" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.941638 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb628464-6376-4dc1-8142-0b9134560fb3-operator-scripts\") pod \"manila-83fe-account-create-update-lc9t4\" (UID: \"eb628464-6376-4dc1-8142-0b9134560fb3\") " pod="openstack/manila-83fe-account-create-update-lc9t4" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.951994 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5468b7496f-chvvc" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.952820 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.967925 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.973980 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj76x\" (UniqueName: \"kubernetes.io/projected/eb628464-6376-4dc1-8142-0b9134560fb3-kube-api-access-cj76x\") pod \"manila-83fe-account-create-update-lc9t4\" (UID: \"eb628464-6376-4dc1-8142-0b9134560fb3\") " pod="openstack/manila-83fe-account-create-update-lc9t4" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.983141 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-68c46dc8bf-mgc5p"] Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.985635 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68c46dc8bf-mgc5p" Nov 29 07:38:59 crc kubenswrapper[4947]: I1129 07:38:59.994449 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68c46dc8bf-mgc5p"] Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.043463 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-horizon-secret-key\") pod \"horizon-68c46dc8bf-mgc5p\" (UID: \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\") " pod="openstack/horizon-68c46dc8bf-mgc5p" Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.043606 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-scripts\") pod \"horizon-68c46dc8bf-mgc5p\" (UID: \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\") " pod="openstack/horizon-68c46dc8bf-mgc5p" Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.043649 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ctss\" (UniqueName: \"kubernetes.io/projected/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-kube-api-access-2ctss\") pod \"horizon-68c46dc8bf-mgc5p\" (UID: \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\") " pod="openstack/horizon-68c46dc8bf-mgc5p" Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.043765 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-logs\") pod \"horizon-68c46dc8bf-mgc5p\" (UID: \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\") " pod="openstack/horizon-68c46dc8bf-mgc5p" Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.044008 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-config-data\") pod \"horizon-68c46dc8bf-mgc5p\" (UID: \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\") " pod="openstack/horizon-68c46dc8bf-mgc5p" Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.155351 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-config-data\") pod \"horizon-68c46dc8bf-mgc5p\" (UID: \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\") " pod="openstack/horizon-68c46dc8bf-mgc5p" Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.156148 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-horizon-secret-key\") pod \"horizon-68c46dc8bf-mgc5p\" (UID: \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\") " pod="openstack/horizon-68c46dc8bf-mgc5p" Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.159429 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-scripts\") pod \"horizon-68c46dc8bf-mgc5p\" (UID: \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\") " pod="openstack/horizon-68c46dc8bf-mgc5p" Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.159721 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ctss\" (UniqueName: \"kubernetes.io/projected/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-kube-api-access-2ctss\") pod \"horizon-68c46dc8bf-mgc5p\" (UID: \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\") " pod="openstack/horizon-68c46dc8bf-mgc5p" Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.159933 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-logs\") pod 
\"horizon-68c46dc8bf-mgc5p\" (UID: \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\") " pod="openstack/horizon-68c46dc8bf-mgc5p" Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.162152 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-config-data\") pod \"horizon-68c46dc8bf-mgc5p\" (UID: \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\") " pod="openstack/horizon-68c46dc8bf-mgc5p" Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.163922 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-scripts\") pod \"horizon-68c46dc8bf-mgc5p\" (UID: \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\") " pod="openstack/horizon-68c46dc8bf-mgc5p" Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.173050 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-logs\") pod \"horizon-68c46dc8bf-mgc5p\" (UID: \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\") " pod="openstack/horizon-68c46dc8bf-mgc5p" Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.186549 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-83fe-account-create-update-lc9t4" Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.215067 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.217640 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ctss\" (UniqueName: \"kubernetes.io/projected/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-kube-api-access-2ctss\") pod \"horizon-68c46dc8bf-mgc5p\" (UID: \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\") " pod="openstack/horizon-68c46dc8bf-mgc5p" Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.222306 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-horizon-secret-key\") pod \"horizon-68c46dc8bf-mgc5p\" (UID: \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\") " pod="openstack/horizon-68c46dc8bf-mgc5p" Nov 29 07:39:00 crc kubenswrapper[4947]: W1129 07:39:00.260693 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06d1947f_99c6_4f58_93f6_b15ea0b89743.slice/crio-5738dd9853b2b3f481d7b4b5cbf4fd56486cf27f50aaf1aa4e420cf3aa353317 WatchSource:0}: Error finding container 5738dd9853b2b3f481d7b4b5cbf4fd56486cf27f50aaf1aa4e420cf3aa353317: Status 404 returned error can't find the container with id 5738dd9853b2b3f481d7b4b5cbf4fd56486cf27f50aaf1aa4e420cf3aa353317 Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.337112 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68c46dc8bf-mgc5p" Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.533014 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"06d1947f-99c6-4f58-93f6-b15ea0b89743","Type":"ContainerStarted","Data":"5738dd9853b2b3f481d7b4b5cbf4fd56486cf27f50aaf1aa4e420cf3aa353317"} Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.536054 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"7887622b-8fe9-4ea2-a867-3490f70c1d87","Type":"ContainerStarted","Data":"4759904e8f704e7600e43af566136a88f2e64d8c5b998c3fcac67f34861a3c28"} Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.811286 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.856112 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5468b7496f-chvvc"] Nov 29 07:39:00 crc kubenswrapper[4947]: I1129 07:39:00.962963 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-tjns7"] Nov 29 07:39:01 crc kubenswrapper[4947]: I1129 07:39:01.094658 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-83fe-account-create-update-lc9t4"] Nov 29 07:39:01 crc kubenswrapper[4947]: W1129 07:39:01.154844 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb628464_6376_4dc1_8142_0b9134560fb3.slice/crio-08e3476778002bd39993fc0032c57ac16b40cc93207e9c36986cf27490c82ca9 WatchSource:0}: Error finding container 08e3476778002bd39993fc0032c57ac16b40cc93207e9c36986cf27490c82ca9: Status 404 returned error can't find the container with id 08e3476778002bd39993fc0032c57ac16b40cc93207e9c36986cf27490c82ca9 Nov 29 07:39:01 crc kubenswrapper[4947]: I1129 07:39:01.251784 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/horizon-68c46dc8bf-mgc5p"] Nov 29 07:39:01 crc kubenswrapper[4947]: I1129 07:39:01.488659 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:39:01 crc kubenswrapper[4947]: I1129 07:39:01.555531 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-83fe-account-create-update-lc9t4" event={"ID":"eb628464-6376-4dc1-8142-0b9134560fb3","Type":"ContainerStarted","Data":"08e3476778002bd39993fc0032c57ac16b40cc93207e9c36986cf27490c82ca9"} Nov 29 07:39:01 crc kubenswrapper[4947]: I1129 07:39:01.569988 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-tjns7" event={"ID":"15bbf1b5-d20c-4156-a60e-d2720d012aa0","Type":"ContainerStarted","Data":"8bf24da7149bebf57d869a22f20663b5b05e35a9941acd98a5bab19f0f3ac85f"} Nov 29 07:39:01 crc kubenswrapper[4947]: I1129 07:39:01.572162 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5468b7496f-chvvc" event={"ID":"18340ffc-dd9e-437a-b838-7d63a0fd0102","Type":"ContainerStarted","Data":"8f8f0b0bc92736696ebd3ee8a4832ab677e7e4cc732e98c11de88610d417a911"} Nov 29 07:39:01 crc kubenswrapper[4947]: I1129 07:39:01.574316 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d01fd87b-812e-4368-9f75-39cf578e76ac","Type":"ContainerStarted","Data":"75e0c8c0d18462b8f5682fe519ec113e82f88747cb859f357c7cd4a82edf5e0f"} Nov 29 07:39:01 crc kubenswrapper[4947]: I1129 07:39:01.576144 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68c46dc8bf-mgc5p" event={"ID":"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e","Type":"ContainerStarted","Data":"d10eef009857a2306aeeaea68249eb64ecda22ca0133944d0d91145d6dccd39e"} Nov 29 07:39:02 crc kubenswrapper[4947]: I1129 07:39:02.597732 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"d01fd87b-812e-4368-9f75-39cf578e76ac","Type":"ContainerStarted","Data":"a826a116655c5526d40dc05a03cdbdfeedbfed6a62874ead9275c788f4f403c5"} Nov 29 07:39:02 crc kubenswrapper[4947]: I1129 07:39:02.601647 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"7887622b-8fe9-4ea2-a867-3490f70c1d87","Type":"ContainerStarted","Data":"bd4d66b8e9312800ba7ac4d2e04d2e8e8071f790532919bcfd8d0a131471c323"} Nov 29 07:39:02 crc kubenswrapper[4947]: I1129 07:39:02.604577 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"06d1947f-99c6-4f58-93f6-b15ea0b89743","Type":"ContainerStarted","Data":"a5ff2dd70036b77804f3109c5a5c9a8c34306156a02ad1d3e7c2770b98da0cee"} Nov 29 07:39:02 crc kubenswrapper[4947]: I1129 07:39:02.607979 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-83fe-account-create-update-lc9t4" event={"ID":"eb628464-6376-4dc1-8142-0b9134560fb3","Type":"ContainerStarted","Data":"f08a08db469a0514ca910b1df3d7dd7ab4fe4c18f7416f9ec1c748152905fd83"} Nov 29 07:39:02 crc kubenswrapper[4947]: I1129 07:39:02.619343 4947 generic.go:334] "Generic (PLEG): container finished" podID="15bbf1b5-d20c-4156-a60e-d2720d012aa0" containerID="1e06e322291d7ec555d9d7f61ea67ee48d6243753cb29c829415d39f2ca2d7fd" exitCode=0 Nov 29 07:39:02 crc kubenswrapper[4947]: I1129 07:39:02.619436 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-tjns7" event={"ID":"15bbf1b5-d20c-4156-a60e-d2720d012aa0","Type":"ContainerDied","Data":"1e06e322291d7ec555d9d7f61ea67ee48d6243753cb29c829415d39f2ca2d7fd"} Nov 29 07:39:02 crc kubenswrapper[4947]: I1129 07:39:02.634235 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-83fe-account-create-update-lc9t4" podStartSLOduration=3.634174187 podStartE2EDuration="3.634174187s" podCreationTimestamp="2025-11-29 07:38:59 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:39:02.6334629 +0000 UTC m=+3893.677844991" watchObservedRunningTime="2025-11-29 07:39:02.634174187 +0000 UTC m=+3893.678556268" Nov 29 07:39:02 crc kubenswrapper[4947]: I1129 07:39:02.660747 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e13a36c7-9b31-4592-91dd-881412235914","Type":"ContainerStarted","Data":"2e624e810807e380218ff68950bd8e67b40aecf056154159131a75067570aa45"} Nov 29 07:39:02 crc kubenswrapper[4947]: I1129 07:39:02.909362 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5468b7496f-chvvc"] Nov 29 07:39:02 crc kubenswrapper[4947]: I1129 07:39:02.946774 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-bf95bd4cd-8f5st"] Nov 29 07:39:02 crc kubenswrapper[4947]: I1129 07:39:02.952799 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:02 crc kubenswrapper[4947]: I1129 07:39:02.958009 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.038886 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bf95bd4cd-8f5st"] Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.054331 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dec28edc-46ae-456a-9be2-ec56bdfd409f-horizon-secret-key\") pod \"horizon-bf95bd4cd-8f5st\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.054832 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/dec28edc-46ae-456a-9be2-ec56bdfd409f-config-data\") pod \"horizon-bf95bd4cd-8f5st\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.054886 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dec28edc-46ae-456a-9be2-ec56bdfd409f-scripts\") pod \"horizon-bf95bd4cd-8f5st\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.054946 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dec28edc-46ae-456a-9be2-ec56bdfd409f-logs\") pod \"horizon-bf95bd4cd-8f5st\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.055029 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dec28edc-46ae-456a-9be2-ec56bdfd409f-horizon-tls-certs\") pod \"horizon-bf95bd4cd-8f5st\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.055073 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec28edc-46ae-456a-9be2-ec56bdfd409f-combined-ca-bundle\") pod \"horizon-bf95bd4cd-8f5st\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.055134 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zrzh\" (UniqueName: 
\"kubernetes.io/projected/dec28edc-46ae-456a-9be2-ec56bdfd409f-kube-api-access-8zrzh\") pod \"horizon-bf95bd4cd-8f5st\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.112364 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68c46dc8bf-mgc5p"] Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.163139 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zrzh\" (UniqueName: \"kubernetes.io/projected/dec28edc-46ae-456a-9be2-ec56bdfd409f-kube-api-access-8zrzh\") pod \"horizon-bf95bd4cd-8f5st\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.163642 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dec28edc-46ae-456a-9be2-ec56bdfd409f-horizon-secret-key\") pod \"horizon-bf95bd4cd-8f5st\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.166020 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dec28edc-46ae-456a-9be2-ec56bdfd409f-config-data\") pod \"horizon-bf95bd4cd-8f5st\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.166096 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dec28edc-46ae-456a-9be2-ec56bdfd409f-scripts\") pod \"horizon-bf95bd4cd-8f5st\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.166168 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dec28edc-46ae-456a-9be2-ec56bdfd409f-logs\") pod \"horizon-bf95bd4cd-8f5st\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.166258 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dec28edc-46ae-456a-9be2-ec56bdfd409f-horizon-tls-certs\") pod \"horizon-bf95bd4cd-8f5st\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.166293 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec28edc-46ae-456a-9be2-ec56bdfd409f-combined-ca-bundle\") pod \"horizon-bf95bd4cd-8f5st\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.171852 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dec28edc-46ae-456a-9be2-ec56bdfd409f-config-data\") pod \"horizon-bf95bd4cd-8f5st\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.172197 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dec28edc-46ae-456a-9be2-ec56bdfd409f-logs\") pod \"horizon-bf95bd4cd-8f5st\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.174505 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dec28edc-46ae-456a-9be2-ec56bdfd409f-scripts\") pod 
\"horizon-bf95bd4cd-8f5st\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.186620 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dec28edc-46ae-456a-9be2-ec56bdfd409f-horizon-secret-key\") pod \"horizon-bf95bd4cd-8f5st\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.187330 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dec28edc-46ae-456a-9be2-ec56bdfd409f-horizon-tls-certs\") pod \"horizon-bf95bd4cd-8f5st\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.189392 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec28edc-46ae-456a-9be2-ec56bdfd409f-combined-ca-bundle\") pod \"horizon-bf95bd4cd-8f5st\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.195565 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zrzh\" (UniqueName: \"kubernetes.io/projected/dec28edc-46ae-456a-9be2-ec56bdfd409f-kube-api-access-8zrzh\") pod \"horizon-bf95bd4cd-8f5st\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.220273 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6757b657b4-vdhrb"] Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.222778 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.235434 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6757b657b4-vdhrb"] Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.297908 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.376278 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4d917345-655a-4f24-bfd1-57dd9a7e9880-horizon-secret-key\") pod \"horizon-6757b657b4-vdhrb\" (UID: \"4d917345-655a-4f24-bfd1-57dd9a7e9880\") " pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.376387 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d917345-655a-4f24-bfd1-57dd9a7e9880-logs\") pod \"horizon-6757b657b4-vdhrb\" (UID: \"4d917345-655a-4f24-bfd1-57dd9a7e9880\") " pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.376415 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d917345-655a-4f24-bfd1-57dd9a7e9880-scripts\") pod \"horizon-6757b657b4-vdhrb\" (UID: \"4d917345-655a-4f24-bfd1-57dd9a7e9880\") " pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.376450 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5lqs\" (UniqueName: \"kubernetes.io/projected/4d917345-655a-4f24-bfd1-57dd9a7e9880-kube-api-access-q5lqs\") pod \"horizon-6757b657b4-vdhrb\" (UID: \"4d917345-655a-4f24-bfd1-57dd9a7e9880\") " pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 
07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.376484 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d917345-655a-4f24-bfd1-57dd9a7e9880-horizon-tls-certs\") pod \"horizon-6757b657b4-vdhrb\" (UID: \"4d917345-655a-4f24-bfd1-57dd9a7e9880\") " pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.376562 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d917345-655a-4f24-bfd1-57dd9a7e9880-combined-ca-bundle\") pod \"horizon-6757b657b4-vdhrb\" (UID: \"4d917345-655a-4f24-bfd1-57dd9a7e9880\") " pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.376629 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d917345-655a-4f24-bfd1-57dd9a7e9880-config-data\") pod \"horizon-6757b657b4-vdhrb\" (UID: \"4d917345-655a-4f24-bfd1-57dd9a7e9880\") " pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.478700 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d917345-655a-4f24-bfd1-57dd9a7e9880-combined-ca-bundle\") pod \"horizon-6757b657b4-vdhrb\" (UID: \"4d917345-655a-4f24-bfd1-57dd9a7e9880\") " pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.479288 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d917345-655a-4f24-bfd1-57dd9a7e9880-config-data\") pod \"horizon-6757b657b4-vdhrb\" (UID: \"4d917345-655a-4f24-bfd1-57dd9a7e9880\") " pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:03 crc kubenswrapper[4947]: 
I1129 07:39:03.479386 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4d917345-655a-4f24-bfd1-57dd9a7e9880-horizon-secret-key\") pod \"horizon-6757b657b4-vdhrb\" (UID: \"4d917345-655a-4f24-bfd1-57dd9a7e9880\") " pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.479426 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d917345-655a-4f24-bfd1-57dd9a7e9880-logs\") pod \"horizon-6757b657b4-vdhrb\" (UID: \"4d917345-655a-4f24-bfd1-57dd9a7e9880\") " pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.479447 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d917345-655a-4f24-bfd1-57dd9a7e9880-scripts\") pod \"horizon-6757b657b4-vdhrb\" (UID: \"4d917345-655a-4f24-bfd1-57dd9a7e9880\") " pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.479479 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5lqs\" (UniqueName: \"kubernetes.io/projected/4d917345-655a-4f24-bfd1-57dd9a7e9880-kube-api-access-q5lqs\") pod \"horizon-6757b657b4-vdhrb\" (UID: \"4d917345-655a-4f24-bfd1-57dd9a7e9880\") " pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.479510 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d917345-655a-4f24-bfd1-57dd9a7e9880-horizon-tls-certs\") pod \"horizon-6757b657b4-vdhrb\" (UID: \"4d917345-655a-4f24-bfd1-57dd9a7e9880\") " pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.488348 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/4d917345-655a-4f24-bfd1-57dd9a7e9880-logs\") pod \"horizon-6757b657b4-vdhrb\" (UID: \"4d917345-655a-4f24-bfd1-57dd9a7e9880\") " pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.488588 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d917345-655a-4f24-bfd1-57dd9a7e9880-scripts\") pod \"horizon-6757b657b4-vdhrb\" (UID: \"4d917345-655a-4f24-bfd1-57dd9a7e9880\") " pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.489652 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d917345-655a-4f24-bfd1-57dd9a7e9880-config-data\") pod \"horizon-6757b657b4-vdhrb\" (UID: \"4d917345-655a-4f24-bfd1-57dd9a7e9880\") " pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.494136 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d917345-655a-4f24-bfd1-57dd9a7e9880-horizon-tls-certs\") pod \"horizon-6757b657b4-vdhrb\" (UID: \"4d917345-655a-4f24-bfd1-57dd9a7e9880\") " pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.494789 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4d917345-655a-4f24-bfd1-57dd9a7e9880-horizon-secret-key\") pod \"horizon-6757b657b4-vdhrb\" (UID: \"4d917345-655a-4f24-bfd1-57dd9a7e9880\") " pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.497454 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d917345-655a-4f24-bfd1-57dd9a7e9880-combined-ca-bundle\") pod \"horizon-6757b657b4-vdhrb\" (UID: 
\"4d917345-655a-4f24-bfd1-57dd9a7e9880\") " pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.515474 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5lqs\" (UniqueName: \"kubernetes.io/projected/4d917345-655a-4f24-bfd1-57dd9a7e9880-kube-api-access-q5lqs\") pod \"horizon-6757b657b4-vdhrb\" (UID: \"4d917345-655a-4f24-bfd1-57dd9a7e9880\") " pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.586191 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.686601 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"06d1947f-99c6-4f58-93f6-b15ea0b89743","Type":"ContainerStarted","Data":"a6b4570249517aea7f1e0fd5fd92f30c0101115c0ece47f97638e4dafe813737"} Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.690177 4947 generic.go:334] "Generic (PLEG): container finished" podID="eb628464-6376-4dc1-8142-0b9134560fb3" containerID="f08a08db469a0514ca910b1df3d7dd7ab4fe4c18f7416f9ec1c748152905fd83" exitCode=0 Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.690289 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-83fe-account-create-update-lc9t4" event={"ID":"eb628464-6376-4dc1-8142-0b9134560fb3","Type":"ContainerDied","Data":"f08a08db469a0514ca910b1df3d7dd7ab4fe4c18f7416f9ec1c748152905fd83"} Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.707016 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e13a36c7-9b31-4592-91dd-881412235914","Type":"ContainerStarted","Data":"1e7ce1939c7d651a2fc6564c5381e708dd3fd9784bdcaa7d2a7e7c67002a4d25"} Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.711541 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-backup-0" event={"ID":"7887622b-8fe9-4ea2-a867-3490f70c1d87","Type":"ContainerStarted","Data":"480fe3317e5ecacf59255c7e63d29c27c931c4181bf105dab9cd1515d1fdf7ab"} Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.754080 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=4.294011708 podStartE2EDuration="5.754050157s" podCreationTimestamp="2025-11-29 07:38:58 +0000 UTC" firstStartedPulling="2025-11-29 07:39:00.28353813 +0000 UTC m=+3891.327920211" lastFinishedPulling="2025-11-29 07:39:01.743576589 +0000 UTC m=+3892.787958660" observedRunningTime="2025-11-29 07:39:03.736243867 +0000 UTC m=+3894.780625968" watchObservedRunningTime="2025-11-29 07:39:03.754050157 +0000 UTC m=+3894.798432258" Nov 29 07:39:03 crc kubenswrapper[4947]: I1129 07:39:03.803819 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=4.311374926 podStartE2EDuration="5.803779892s" podCreationTimestamp="2025-11-29 07:38:58 +0000 UTC" firstStartedPulling="2025-11-29 07:38:59.737678986 +0000 UTC m=+3890.782061067" lastFinishedPulling="2025-11-29 07:39:01.230083952 +0000 UTC m=+3892.274466033" observedRunningTime="2025-11-29 07:39:03.770575474 +0000 UTC m=+3894.814957555" watchObservedRunningTime="2025-11-29 07:39:03.803779892 +0000 UTC m=+3894.848161973" Nov 29 07:39:04 crc kubenswrapper[4947]: I1129 07:39:04.145998 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bf95bd4cd-8f5st"] Nov 29 07:39:04 crc kubenswrapper[4947]: W1129 07:39:04.190877 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddec28edc_46ae_456a_9be2_ec56bdfd409f.slice/crio-e49fb151ce2885b6b1617adeae58403f34d78f3a0308f3441b476dc192ab8f39 WatchSource:0}: Error finding container e49fb151ce2885b6b1617adeae58403f34d78f3a0308f3441b476dc192ab8f39: Status 
404 returned error can't find the container with id e49fb151ce2885b6b1617adeae58403f34d78f3a0308f3441b476dc192ab8f39 Nov 29 07:39:04 crc kubenswrapper[4947]: I1129 07:39:04.474015 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6757b657b4-vdhrb"] Nov 29 07:39:04 crc kubenswrapper[4947]: W1129 07:39:04.482584 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d917345_655a_4f24_bfd1_57dd9a7e9880.slice/crio-5e343044ba262f061a1f4ed6a3b4b9e51a6a1df0235ab8686cb38cdb18c035bb WatchSource:0}: Error finding container 5e343044ba262f061a1f4ed6a3b4b9e51a6a1df0235ab8686cb38cdb18c035bb: Status 404 returned error can't find the container with id 5e343044ba262f061a1f4ed6a3b4b9e51a6a1df0235ab8686cb38cdb18c035bb Nov 29 07:39:04 crc kubenswrapper[4947]: I1129 07:39:04.670799 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-tjns7" Nov 29 07:39:04 crc kubenswrapper[4947]: I1129 07:39:04.736343 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-tjns7" Nov 29 07:39:04 crc kubenswrapper[4947]: I1129 07:39:04.736383 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-tjns7" event={"ID":"15bbf1b5-d20c-4156-a60e-d2720d012aa0","Type":"ContainerDied","Data":"8bf24da7149bebf57d869a22f20663b5b05e35a9941acd98a5bab19f0f3ac85f"} Nov 29 07:39:04 crc kubenswrapper[4947]: I1129 07:39:04.736500 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bf24da7149bebf57d869a22f20663b5b05e35a9941acd98a5bab19f0f3ac85f" Nov 29 07:39:04 crc kubenswrapper[4947]: I1129 07:39:04.743239 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e13a36c7-9b31-4592-91dd-881412235914","Type":"ContainerStarted","Data":"219b6e108855e197d99ad838e86f059fccb856798da5e5191fef7417dd2f0965"} Nov 29 07:39:04 crc kubenswrapper[4947]: I1129 07:39:04.743288 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e13a36c7-9b31-4592-91dd-881412235914" containerName="glance-log" containerID="cri-o://1e7ce1939c7d651a2fc6564c5381e708dd3fd9784bdcaa7d2a7e7c67002a4d25" gracePeriod=30 Nov 29 07:39:04 crc kubenswrapper[4947]: I1129 07:39:04.743559 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e13a36c7-9b31-4592-91dd-881412235914" containerName="glance-httpd" containerID="cri-o://219b6e108855e197d99ad838e86f059fccb856798da5e5191fef7417dd2f0965" gracePeriod=30 Nov 29 07:39:04 crc kubenswrapper[4947]: I1129 07:39:04.753431 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6757b657b4-vdhrb" event={"ID":"4d917345-655a-4f24-bfd1-57dd9a7e9880","Type":"ContainerStarted","Data":"5e343044ba262f061a1f4ed6a3b4b9e51a6a1df0235ab8686cb38cdb18c035bb"} Nov 29 07:39:04 crc kubenswrapper[4947]: I1129 07:39:04.756445 
4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d01fd87b-812e-4368-9f75-39cf578e76ac","Type":"ContainerStarted","Data":"b40d59f4db608684e2452c01c32195513615ea679223d0c9b197a45e95d31e1b"} Nov 29 07:39:04 crc kubenswrapper[4947]: I1129 07:39:04.756687 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d01fd87b-812e-4368-9f75-39cf578e76ac" containerName="glance-log" containerID="cri-o://a826a116655c5526d40dc05a03cdbdfeedbfed6a62874ead9275c788f4f403c5" gracePeriod=30 Nov 29 07:39:04 crc kubenswrapper[4947]: I1129 07:39:04.756843 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d01fd87b-812e-4368-9f75-39cf578e76ac" containerName="glance-httpd" containerID="cri-o://b40d59f4db608684e2452c01c32195513615ea679223d0c9b197a45e95d31e1b" gracePeriod=30 Nov 29 07:39:04 crc kubenswrapper[4947]: I1129 07:39:04.761515 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf95bd4cd-8f5st" event={"ID":"dec28edc-46ae-456a-9be2-ec56bdfd409f","Type":"ContainerStarted","Data":"e49fb151ce2885b6b1617adeae58403f34d78f3a0308f3441b476dc192ab8f39"} Nov 29 07:39:04 crc kubenswrapper[4947]: I1129 07:39:04.809122 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.809092918 podStartE2EDuration="6.809092918s" podCreationTimestamp="2025-11-29 07:38:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:39:04.78697899 +0000 UTC m=+3895.831361081" watchObservedRunningTime="2025-11-29 07:39:04.809092918 +0000 UTC m=+3895.853474999" Nov 29 07:39:04 crc kubenswrapper[4947]: I1129 07:39:04.823153 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-external-api-0" podStartSLOduration=7.823102972 podStartE2EDuration="7.823102972s" podCreationTimestamp="2025-11-29 07:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:39:04.816565597 +0000 UTC m=+3895.860947698" watchObservedRunningTime="2025-11-29 07:39:04.823102972 +0000 UTC m=+3895.867485053" Nov 29 07:39:04 crc kubenswrapper[4947]: I1129 07:39:04.845743 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz57l\" (UniqueName: \"kubernetes.io/projected/15bbf1b5-d20c-4156-a60e-d2720d012aa0-kube-api-access-rz57l\") pod \"15bbf1b5-d20c-4156-a60e-d2720d012aa0\" (UID: \"15bbf1b5-d20c-4156-a60e-d2720d012aa0\") " Nov 29 07:39:04 crc kubenswrapper[4947]: I1129 07:39:04.845990 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15bbf1b5-d20c-4156-a60e-d2720d012aa0-operator-scripts\") pod \"15bbf1b5-d20c-4156-a60e-d2720d012aa0\" (UID: \"15bbf1b5-d20c-4156-a60e-d2720d012aa0\") " Nov 29 07:39:04 crc kubenswrapper[4947]: I1129 07:39:04.854738 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15bbf1b5-d20c-4156-a60e-d2720d012aa0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15bbf1b5-d20c-4156-a60e-d2720d012aa0" (UID: "15bbf1b5-d20c-4156-a60e-d2720d012aa0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:04 crc kubenswrapper[4947]: I1129 07:39:04.894791 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15bbf1b5-d20c-4156-a60e-d2720d012aa0-kube-api-access-rz57l" (OuterVolumeSpecName: "kube-api-access-rz57l") pod "15bbf1b5-d20c-4156-a60e-d2720d012aa0" (UID: "15bbf1b5-d20c-4156-a60e-d2720d012aa0"). 
InnerVolumeSpecName "kube-api-access-rz57l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:04 crc kubenswrapper[4947]: I1129 07:39:04.950395 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rz57l\" (UniqueName: \"kubernetes.io/projected/15bbf1b5-d20c-4156-a60e-d2720d012aa0-kube-api-access-rz57l\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:04 crc kubenswrapper[4947]: I1129 07:39:04.950790 4947 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15bbf1b5-d20c-4156-a60e-d2720d012aa0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.243094 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-83fe-account-create-update-lc9t4" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.262673 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj76x\" (UniqueName: \"kubernetes.io/projected/eb628464-6376-4dc1-8142-0b9134560fb3-kube-api-access-cj76x\") pod \"eb628464-6376-4dc1-8142-0b9134560fb3\" (UID: \"eb628464-6376-4dc1-8142-0b9134560fb3\") " Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.265240 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb628464-6376-4dc1-8142-0b9134560fb3-operator-scripts\") pod \"eb628464-6376-4dc1-8142-0b9134560fb3\" (UID: \"eb628464-6376-4dc1-8142-0b9134560fb3\") " Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.266183 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb628464-6376-4dc1-8142-0b9134560fb3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb628464-6376-4dc1-8142-0b9134560fb3" (UID: "eb628464-6376-4dc1-8142-0b9134560fb3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.270569 4947 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb628464-6376-4dc1-8142-0b9134560fb3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.281793 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb628464-6376-4dc1-8142-0b9134560fb3-kube-api-access-cj76x" (OuterVolumeSpecName: "kube-api-access-cj76x") pod "eb628464-6376-4dc1-8142-0b9134560fb3" (UID: "eb628464-6376-4dc1-8142-0b9134560fb3"). InnerVolumeSpecName "kube-api-access-cj76x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.380891 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cj76x\" (UniqueName: \"kubernetes.io/projected/eb628464-6376-4dc1-8142-0b9134560fb3-kube-api-access-cj76x\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.765398 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.805963 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-config-data\") pod \"e13a36c7-9b31-4592-91dd-881412235914\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.806041 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-scripts\") pod \"e13a36c7-9b31-4592-91dd-881412235914\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.806081 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-internal-tls-certs\") pod \"e13a36c7-9b31-4592-91dd-881412235914\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.806183 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"e13a36c7-9b31-4592-91dd-881412235914\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.806740 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nfzl\" (UniqueName: \"kubernetes.io/projected/e13a36c7-9b31-4592-91dd-881412235914-kube-api-access-4nfzl\") pod \"e13a36c7-9b31-4592-91dd-881412235914\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.806793 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-combined-ca-bundle\") pod \"e13a36c7-9b31-4592-91dd-881412235914\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.806900 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e13a36c7-9b31-4592-91dd-881412235914-ceph\") pod \"e13a36c7-9b31-4592-91dd-881412235914\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.807008 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e13a36c7-9b31-4592-91dd-881412235914-logs\") pod \"e13a36c7-9b31-4592-91dd-881412235914\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.807078 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e13a36c7-9b31-4592-91dd-881412235914-httpd-run\") pod \"e13a36c7-9b31-4592-91dd-881412235914\" (UID: \"e13a36c7-9b31-4592-91dd-881412235914\") " Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.811855 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.813407 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e13a36c7-9b31-4592-91dd-881412235914-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e13a36c7-9b31-4592-91dd-881412235914" (UID: "e13a36c7-9b31-4592-91dd-881412235914"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.813715 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e13a36c7-9b31-4592-91dd-881412235914-logs" (OuterVolumeSpecName: "logs") pod "e13a36c7-9b31-4592-91dd-881412235914" (UID: "e13a36c7-9b31-4592-91dd-881412235914"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.823104 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e13a36c7-9b31-4592-91dd-881412235914-ceph" (OuterVolumeSpecName: "ceph") pod "e13a36c7-9b31-4592-91dd-881412235914" (UID: "e13a36c7-9b31-4592-91dd-881412235914"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.824006 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-83fe-account-create-update-lc9t4" event={"ID":"eb628464-6376-4dc1-8142-0b9134560fb3","Type":"ContainerDied","Data":"08e3476778002bd39993fc0032c57ac16b40cc93207e9c36986cf27490c82ca9"} Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.824079 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08e3476778002bd39993fc0032c57ac16b40cc93207e9c36986cf27490c82ca9" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.824293 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-83fe-account-create-update-lc9t4" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.831432 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "e13a36c7-9b31-4592-91dd-881412235914" (UID: "e13a36c7-9b31-4592-91dd-881412235914"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.838322 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-scripts" (OuterVolumeSpecName: "scripts") pod "e13a36c7-9b31-4592-91dd-881412235914" (UID: "e13a36c7-9b31-4592-91dd-881412235914"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.852552 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e13a36c7-9b31-4592-91dd-881412235914-kube-api-access-4nfzl" (OuterVolumeSpecName: "kube-api-access-4nfzl") pod "e13a36c7-9b31-4592-91dd-881412235914" (UID: "e13a36c7-9b31-4592-91dd-881412235914"). InnerVolumeSpecName "kube-api-access-4nfzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.855210 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e13a36c7-9b31-4592-91dd-881412235914" (UID: "e13a36c7-9b31-4592-91dd-881412235914"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.858500 4947 generic.go:334] "Generic (PLEG): container finished" podID="e13a36c7-9b31-4592-91dd-881412235914" containerID="219b6e108855e197d99ad838e86f059fccb856798da5e5191fef7417dd2f0965" exitCode=143 Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.858552 4947 generic.go:334] "Generic (PLEG): container finished" podID="e13a36c7-9b31-4592-91dd-881412235914" containerID="1e7ce1939c7d651a2fc6564c5381e708dd3fd9784bdcaa7d2a7e7c67002a4d25" exitCode=143 Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.858723 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e13a36c7-9b31-4592-91dd-881412235914","Type":"ContainerDied","Data":"219b6e108855e197d99ad838e86f059fccb856798da5e5191fef7417dd2f0965"} Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.858773 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e13a36c7-9b31-4592-91dd-881412235914","Type":"ContainerDied","Data":"1e7ce1939c7d651a2fc6564c5381e708dd3fd9784bdcaa7d2a7e7c67002a4d25"} Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.858789 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e13a36c7-9b31-4592-91dd-881412235914","Type":"ContainerDied","Data":"2e624e810807e380218ff68950bd8e67b40aecf056154159131a75067570aa45"} Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.858812 4947 scope.go:117] "RemoveContainer" containerID="219b6e108855e197d99ad838e86f059fccb856798da5e5191fef7417dd2f0965" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.859137 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.873699 4947 generic.go:334] "Generic (PLEG): container finished" podID="d01fd87b-812e-4368-9f75-39cf578e76ac" containerID="b40d59f4db608684e2452c01c32195513615ea679223d0c9b197a45e95d31e1b" exitCode=0 Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.873769 4947 generic.go:334] "Generic (PLEG): container finished" podID="d01fd87b-812e-4368-9f75-39cf578e76ac" containerID="a826a116655c5526d40dc05a03cdbdfeedbfed6a62874ead9275c788f4f403c5" exitCode=143 Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.873695 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d01fd87b-812e-4368-9f75-39cf578e76ac","Type":"ContainerDied","Data":"b40d59f4db608684e2452c01c32195513615ea679223d0c9b197a45e95d31e1b"} Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.873938 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.874260 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d01fd87b-812e-4368-9f75-39cf578e76ac","Type":"ContainerDied","Data":"a826a116655c5526d40dc05a03cdbdfeedbfed6a62874ead9275c788f4f403c5"} Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.874289 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d01fd87b-812e-4368-9f75-39cf578e76ac","Type":"ContainerDied","Data":"75e0c8c0d18462b8f5682fe519ec113e82f88747cb859f357c7cd4a82edf5e0f"} Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.917981 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-config-data\") pod \"d01fd87b-812e-4368-9f75-39cf578e76ac\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.918071 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d01fd87b-812e-4368-9f75-39cf578e76ac-logs\") pod \"d01fd87b-812e-4368-9f75-39cf578e76ac\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.918115 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d01fd87b-812e-4368-9f75-39cf578e76ac-httpd-run\") pod \"d01fd87b-812e-4368-9f75-39cf578e76ac\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.918133 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-combined-ca-bundle\") pod 
\"d01fd87b-812e-4368-9f75-39cf578e76ac\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.918203 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d01fd87b-812e-4368-9f75-39cf578e76ac-ceph\") pod \"d01fd87b-812e-4368-9f75-39cf578e76ac\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.918278 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-public-tls-certs\") pod \"d01fd87b-812e-4368-9f75-39cf578e76ac\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.918341 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q27dd\" (UniqueName: \"kubernetes.io/projected/d01fd87b-812e-4368-9f75-39cf578e76ac-kube-api-access-q27dd\") pod \"d01fd87b-812e-4368-9f75-39cf578e76ac\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.918401 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"d01fd87b-812e-4368-9f75-39cf578e76ac\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.918430 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-scripts\") pod \"d01fd87b-812e-4368-9f75-39cf578e76ac\" (UID: \"d01fd87b-812e-4368-9f75-39cf578e76ac\") " Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.918869 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.918895 4947 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.918905 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nfzl\" (UniqueName: \"kubernetes.io/projected/e13a36c7-9b31-4592-91dd-881412235914-kube-api-access-4nfzl\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.918916 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.918925 4947 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e13a36c7-9b31-4592-91dd-881412235914-ceph\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.918933 4947 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e13a36c7-9b31-4592-91dd-881412235914-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.918941 4947 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e13a36c7-9b31-4592-91dd-881412235914-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.923634 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d01fd87b-812e-4368-9f75-39cf578e76ac-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d01fd87b-812e-4368-9f75-39cf578e76ac" (UID: 
"d01fd87b-812e-4368-9f75-39cf578e76ac"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.923939 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d01fd87b-812e-4368-9f75-39cf578e76ac-logs" (OuterVolumeSpecName: "logs") pod "d01fd87b-812e-4368-9f75-39cf578e76ac" (UID: "d01fd87b-812e-4368-9f75-39cf578e76ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.953506 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "d01fd87b-812e-4368-9f75-39cf578e76ac" (UID: "d01fd87b-812e-4368-9f75-39cf578e76ac"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.953609 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d01fd87b-812e-4368-9f75-39cf578e76ac-ceph" (OuterVolumeSpecName: "ceph") pod "d01fd87b-812e-4368-9f75-39cf578e76ac" (UID: "d01fd87b-812e-4368-9f75-39cf578e76ac"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.956888 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-scripts" (OuterVolumeSpecName: "scripts") pod "d01fd87b-812e-4368-9f75-39cf578e76ac" (UID: "d01fd87b-812e-4368-9f75-39cf578e76ac"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.958540 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d01fd87b-812e-4368-9f75-39cf578e76ac-kube-api-access-q27dd" (OuterVolumeSpecName: "kube-api-access-q27dd") pod "d01fd87b-812e-4368-9f75-39cf578e76ac" (UID: "d01fd87b-812e-4368-9f75-39cf578e76ac"). InnerVolumeSpecName "kube-api-access-q27dd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.970778 4947 scope.go:117] "RemoveContainer" containerID="1e7ce1939c7d651a2fc6564c5381e708dd3fd9784bdcaa7d2a7e7c67002a4d25" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.990929 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d01fd87b-812e-4368-9f75-39cf578e76ac" (UID: "d01fd87b-812e-4368-9f75-39cf578e76ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:05 crc kubenswrapper[4947]: I1129 07:39:05.998550 4947 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.012019 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e13a36c7-9b31-4592-91dd-881412235914" (UID: "e13a36c7-9b31-4592-91dd-881412235914"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.015433 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d01fd87b-812e-4368-9f75-39cf578e76ac" (UID: "d01fd87b-812e-4368-9f75-39cf578e76ac"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.015567 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-config-data" (OuterVolumeSpecName: "config-data") pod "d01fd87b-812e-4368-9f75-39cf578e76ac" (UID: "d01fd87b-812e-4368-9f75-39cf578e76ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.021351 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.021396 4947 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.021410 4947 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d01fd87b-812e-4368-9f75-39cf578e76ac-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.021423 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:06 crc 
kubenswrapper[4947]: I1129 07:39:06.021432 4947 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d01fd87b-812e-4368-9f75-39cf578e76ac-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.021440 4947 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.021450 4947 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d01fd87b-812e-4368-9f75-39cf578e76ac-ceph\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.021469 4947 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.021478 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q27dd\" (UniqueName: \"kubernetes.io/projected/d01fd87b-812e-4368-9f75-39cf578e76ac-kube-api-access-q27dd\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.021512 4947 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.021521 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d01fd87b-812e-4368-9f75-39cf578e76ac-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.025055 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-config-data" 
(OuterVolumeSpecName: "config-data") pod "e13a36c7-9b31-4592-91dd-881412235914" (UID: "e13a36c7-9b31-4592-91dd-881412235914"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.032265 4947 scope.go:117] "RemoveContainer" containerID="219b6e108855e197d99ad838e86f059fccb856798da5e5191fef7417dd2f0965" Nov 29 07:39:06 crc kubenswrapper[4947]: E1129 07:39:06.032914 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"219b6e108855e197d99ad838e86f059fccb856798da5e5191fef7417dd2f0965\": container with ID starting with 219b6e108855e197d99ad838e86f059fccb856798da5e5191fef7417dd2f0965 not found: ID does not exist" containerID="219b6e108855e197d99ad838e86f059fccb856798da5e5191fef7417dd2f0965" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.033083 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"219b6e108855e197d99ad838e86f059fccb856798da5e5191fef7417dd2f0965"} err="failed to get container status \"219b6e108855e197d99ad838e86f059fccb856798da5e5191fef7417dd2f0965\": rpc error: code = NotFound desc = could not find container \"219b6e108855e197d99ad838e86f059fccb856798da5e5191fef7417dd2f0965\": container with ID starting with 219b6e108855e197d99ad838e86f059fccb856798da5e5191fef7417dd2f0965 not found: ID does not exist" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.033121 4947 scope.go:117] "RemoveContainer" containerID="1e7ce1939c7d651a2fc6564c5381e708dd3fd9784bdcaa7d2a7e7c67002a4d25" Nov 29 07:39:06 crc kubenswrapper[4947]: E1129 07:39:06.033852 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e7ce1939c7d651a2fc6564c5381e708dd3fd9784bdcaa7d2a7e7c67002a4d25\": container with ID starting with 1e7ce1939c7d651a2fc6564c5381e708dd3fd9784bdcaa7d2a7e7c67002a4d25 not found: 
ID does not exist" containerID="1e7ce1939c7d651a2fc6564c5381e708dd3fd9784bdcaa7d2a7e7c67002a4d25" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.033897 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e7ce1939c7d651a2fc6564c5381e708dd3fd9784bdcaa7d2a7e7c67002a4d25"} err="failed to get container status \"1e7ce1939c7d651a2fc6564c5381e708dd3fd9784bdcaa7d2a7e7c67002a4d25\": rpc error: code = NotFound desc = could not find container \"1e7ce1939c7d651a2fc6564c5381e708dd3fd9784bdcaa7d2a7e7c67002a4d25\": container with ID starting with 1e7ce1939c7d651a2fc6564c5381e708dd3fd9784bdcaa7d2a7e7c67002a4d25 not found: ID does not exist" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.033927 4947 scope.go:117] "RemoveContainer" containerID="219b6e108855e197d99ad838e86f059fccb856798da5e5191fef7417dd2f0965" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.034287 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"219b6e108855e197d99ad838e86f059fccb856798da5e5191fef7417dd2f0965"} err="failed to get container status \"219b6e108855e197d99ad838e86f059fccb856798da5e5191fef7417dd2f0965\": rpc error: code = NotFound desc = could not find container \"219b6e108855e197d99ad838e86f059fccb856798da5e5191fef7417dd2f0965\": container with ID starting with 219b6e108855e197d99ad838e86f059fccb856798da5e5191fef7417dd2f0965 not found: ID does not exist" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.034318 4947 scope.go:117] "RemoveContainer" containerID="1e7ce1939c7d651a2fc6564c5381e708dd3fd9784bdcaa7d2a7e7c67002a4d25" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.034659 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e7ce1939c7d651a2fc6564c5381e708dd3fd9784bdcaa7d2a7e7c67002a4d25"} err="failed to get container status \"1e7ce1939c7d651a2fc6564c5381e708dd3fd9784bdcaa7d2a7e7c67002a4d25\": rpc error: code = 
NotFound desc = could not find container \"1e7ce1939c7d651a2fc6564c5381e708dd3fd9784bdcaa7d2a7e7c67002a4d25\": container with ID starting with 1e7ce1939c7d651a2fc6564c5381e708dd3fd9784bdcaa7d2a7e7c67002a4d25 not found: ID does not exist" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.034695 4947 scope.go:117] "RemoveContainer" containerID="b40d59f4db608684e2452c01c32195513615ea679223d0c9b197a45e95d31e1b" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.049654 4947 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.088355 4947 scope.go:117] "RemoveContainer" containerID="a826a116655c5526d40dc05a03cdbdfeedbfed6a62874ead9275c788f4f403c5" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.117453 4947 scope.go:117] "RemoveContainer" containerID="b40d59f4db608684e2452c01c32195513615ea679223d0c9b197a45e95d31e1b" Nov 29 07:39:06 crc kubenswrapper[4947]: E1129 07:39:06.120154 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b40d59f4db608684e2452c01c32195513615ea679223d0c9b197a45e95d31e1b\": container with ID starting with b40d59f4db608684e2452c01c32195513615ea679223d0c9b197a45e95d31e1b not found: ID does not exist" containerID="b40d59f4db608684e2452c01c32195513615ea679223d0c9b197a45e95d31e1b" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.120203 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b40d59f4db608684e2452c01c32195513615ea679223d0c9b197a45e95d31e1b"} err="failed to get container status \"b40d59f4db608684e2452c01c32195513615ea679223d0c9b197a45e95d31e1b\": rpc error: code = NotFound desc = could not find container \"b40d59f4db608684e2452c01c32195513615ea679223d0c9b197a45e95d31e1b\": container with ID starting with 
b40d59f4db608684e2452c01c32195513615ea679223d0c9b197a45e95d31e1b not found: ID does not exist" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.120255 4947 scope.go:117] "RemoveContainer" containerID="a826a116655c5526d40dc05a03cdbdfeedbfed6a62874ead9275c788f4f403c5" Nov 29 07:39:06 crc kubenswrapper[4947]: E1129 07:39:06.122719 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a826a116655c5526d40dc05a03cdbdfeedbfed6a62874ead9275c788f4f403c5\": container with ID starting with a826a116655c5526d40dc05a03cdbdfeedbfed6a62874ead9275c788f4f403c5 not found: ID does not exist" containerID="a826a116655c5526d40dc05a03cdbdfeedbfed6a62874ead9275c788f4f403c5" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.122759 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a826a116655c5526d40dc05a03cdbdfeedbfed6a62874ead9275c788f4f403c5"} err="failed to get container status \"a826a116655c5526d40dc05a03cdbdfeedbfed6a62874ead9275c788f4f403c5\": rpc error: code = NotFound desc = could not find container \"a826a116655c5526d40dc05a03cdbdfeedbfed6a62874ead9275c788f4f403c5\": container with ID starting with a826a116655c5526d40dc05a03cdbdfeedbfed6a62874ead9275c788f4f403c5 not found: ID does not exist" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.122780 4947 scope.go:117] "RemoveContainer" containerID="b40d59f4db608684e2452c01c32195513615ea679223d0c9b197a45e95d31e1b" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.123354 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b40d59f4db608684e2452c01c32195513615ea679223d0c9b197a45e95d31e1b"} err="failed to get container status \"b40d59f4db608684e2452c01c32195513615ea679223d0c9b197a45e95d31e1b\": rpc error: code = NotFound desc = could not find container \"b40d59f4db608684e2452c01c32195513615ea679223d0c9b197a45e95d31e1b\": container with ID 
starting with b40d59f4db608684e2452c01c32195513615ea679223d0c9b197a45e95d31e1b not found: ID does not exist" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.123384 4947 scope.go:117] "RemoveContainer" containerID="a826a116655c5526d40dc05a03cdbdfeedbfed6a62874ead9275c788f4f403c5" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.123865 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a826a116655c5526d40dc05a03cdbdfeedbfed6a62874ead9275c788f4f403c5"} err="failed to get container status \"a826a116655c5526d40dc05a03cdbdfeedbfed6a62874ead9275c788f4f403c5\": rpc error: code = NotFound desc = could not find container \"a826a116655c5526d40dc05a03cdbdfeedbfed6a62874ead9275c788f4f403c5\": container with ID starting with a826a116655c5526d40dc05a03cdbdfeedbfed6a62874ead9275c788f4f403c5 not found: ID does not exist" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.123891 4947 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.123909 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13a36c7-9b31-4592-91dd-881412235914-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.255718 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.321669 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.343416 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.360895 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.382394 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:39:06 crc kubenswrapper[4947]: E1129 07:39:06.382920 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13a36c7-9b31-4592-91dd-881412235914" containerName="glance-httpd" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.382935 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13a36c7-9b31-4592-91dd-881412235914" containerName="glance-httpd" Nov 29 07:39:06 crc kubenswrapper[4947]: E1129 07:39:06.382955 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15bbf1b5-d20c-4156-a60e-d2720d012aa0" containerName="mariadb-database-create" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.382962 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="15bbf1b5-d20c-4156-a60e-d2720d012aa0" containerName="mariadb-database-create" Nov 29 07:39:06 crc kubenswrapper[4947]: E1129 07:39:06.382978 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d01fd87b-812e-4368-9f75-39cf578e76ac" containerName="glance-log" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.382984 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="d01fd87b-812e-4368-9f75-39cf578e76ac" containerName="glance-log" Nov 29 07:39:06 crc kubenswrapper[4947]: E1129 07:39:06.382991 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb628464-6376-4dc1-8142-0b9134560fb3" containerName="mariadb-account-create-update" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.382996 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb628464-6376-4dc1-8142-0b9134560fb3" containerName="mariadb-account-create-update" Nov 29 07:39:06 crc kubenswrapper[4947]: E1129 07:39:06.383012 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d01fd87b-812e-4368-9f75-39cf578e76ac" 
containerName="glance-httpd" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.383017 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="d01fd87b-812e-4368-9f75-39cf578e76ac" containerName="glance-httpd" Nov 29 07:39:06 crc kubenswrapper[4947]: E1129 07:39:06.383033 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13a36c7-9b31-4592-91dd-881412235914" containerName="glance-log" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.383038 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13a36c7-9b31-4592-91dd-881412235914" containerName="glance-log" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.383293 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="e13a36c7-9b31-4592-91dd-881412235914" containerName="glance-httpd" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.383308 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb628464-6376-4dc1-8142-0b9134560fb3" containerName="mariadb-account-create-update" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.383329 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="e13a36c7-9b31-4592-91dd-881412235914" containerName="glance-log" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.383338 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="d01fd87b-812e-4368-9f75-39cf578e76ac" containerName="glance-httpd" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.383351 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="d01fd87b-812e-4368-9f75-39cf578e76ac" containerName="glance-log" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.383361 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="15bbf1b5-d20c-4156-a60e-d2720d012aa0" containerName="mariadb-database-create" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.384627 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.387382 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.387784 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.388577 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8278w" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.391084 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.396767 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.412977 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.416857 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.422167 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.422410 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.428251 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.436647 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.436705 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjvbq\" (UniqueName: \"kubernetes.io/projected/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-kube-api-access-zjvbq\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.436743 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.436772 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.436811 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.436842 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.437656 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.437714 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.437778 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" 
(UniqueName: \"kubernetes.io/projected/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.540748 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.540823 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.540880 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5577dab3-c69a-480a-8fed-a9fdaf4152c8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.540944 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5r6b\" (UniqueName: \"kubernetes.io/projected/5577dab3-c69a-480a-8fed-a9fdaf4152c8-kube-api-access-h5r6b\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.540996 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/5577dab3-c69a-480a-8fed-a9fdaf4152c8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.541027 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.541059 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.541108 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.541150 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5577dab3-c69a-480a-8fed-a9fdaf4152c8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.541186 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.541257 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5577dab3-c69a-480a-8fed-a9fdaf4152c8-ceph\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.541290 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5577dab3-c69a-480a-8fed-a9fdaf4152c8-logs\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.541317 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5577dab3-c69a-480a-8fed-a9fdaf4152c8-config-data\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.541369 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.541400 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjvbq\" (UniqueName: 
\"kubernetes.io/projected/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-kube-api-access-zjvbq\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.541434 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.541460 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5577dab3-c69a-480a-8fed-a9fdaf4152c8-scripts\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.541486 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.542152 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.543541 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.543776 4947 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.549408 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.549670 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.552775 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.581267 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " 
pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.582497 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.586835 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjvbq\" (UniqueName: \"kubernetes.io/projected/7473ae66-4c18-4a4a-92ab-cd0ce58ace1c-kube-api-access-zjvbq\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.599541 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.644242 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5577dab3-c69a-480a-8fed-a9fdaf4152c8-scripts\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.644427 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5577dab3-c69a-480a-8fed-a9fdaf4152c8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 
07:39:06.644580 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5r6b\" (UniqueName: \"kubernetes.io/projected/5577dab3-c69a-480a-8fed-a9fdaf4152c8-kube-api-access-h5r6b\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.644703 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5577dab3-c69a-480a-8fed-a9fdaf4152c8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.645546 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.645654 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5577dab3-c69a-480a-8fed-a9fdaf4152c8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.646244 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5577dab3-c69a-480a-8fed-a9fdaf4152c8-ceph\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.646299 4947 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5577dab3-c69a-480a-8fed-a9fdaf4152c8-logs\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.646334 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5577dab3-c69a-480a-8fed-a9fdaf4152c8-config-data\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.646491 4947 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.649213 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5577dab3-c69a-480a-8fed-a9fdaf4152c8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.650010 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5577dab3-c69a-480a-8fed-a9fdaf4152c8-logs\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.653894 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5577dab3-c69a-480a-8fed-a9fdaf4152c8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.656548 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5577dab3-c69a-480a-8fed-a9fdaf4152c8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.656654 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5577dab3-c69a-480a-8fed-a9fdaf4152c8-ceph\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.656919 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5577dab3-c69a-480a-8fed-a9fdaf4152c8-scripts\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.657820 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5577dab3-c69a-480a-8fed-a9fdaf4152c8-config-data\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.669427 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5r6b\" (UniqueName: \"kubernetes.io/projected/5577dab3-c69a-480a-8fed-a9fdaf4152c8-kube-api-access-h5r6b\") pod 
\"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.719779 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"5577dab3-c69a-480a-8fed-a9fdaf4152c8\") " pod="openstack/glance-default-external-api-0" Nov 29 07:39:06 crc kubenswrapper[4947]: I1129 07:39:06.748594 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:39:07 crc kubenswrapper[4947]: I1129 07:39:07.023518 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:39:07 crc kubenswrapper[4947]: I1129 07:39:07.187130 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:39:07 crc kubenswrapper[4947]: E1129 07:39:07.187787 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:39:07 crc kubenswrapper[4947]: I1129 07:39:07.204123 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d01fd87b-812e-4368-9f75-39cf578e76ac" path="/var/lib/kubelet/pods/d01fd87b-812e-4368-9f75-39cf578e76ac/volumes" Nov 29 07:39:07 crc kubenswrapper[4947]: I1129 07:39:07.205523 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e13a36c7-9b31-4592-91dd-881412235914" 
path="/var/lib/kubelet/pods/e13a36c7-9b31-4592-91dd-881412235914/volumes" Nov 29 07:39:07 crc kubenswrapper[4947]: I1129 07:39:07.520359 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:39:07 crc kubenswrapper[4947]: W1129 07:39:07.543769 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7473ae66_4c18_4a4a_92ab_cd0ce58ace1c.slice/crio-7307a9002634b0d0a2fe26ccf734912f7748967783a308b3ed13afafa63b05c7 WatchSource:0}: Error finding container 7307a9002634b0d0a2fe26ccf734912f7748967783a308b3ed13afafa63b05c7: Status 404 returned error can't find the container with id 7307a9002634b0d0a2fe26ccf734912f7748967783a308b3ed13afafa63b05c7 Nov 29 07:39:07 crc kubenswrapper[4947]: I1129 07:39:07.704932 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:39:07 crc kubenswrapper[4947]: I1129 07:39:07.971924 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c","Type":"ContainerStarted","Data":"7307a9002634b0d0a2fe26ccf734912f7748967783a308b3ed13afafa63b05c7"} Nov 29 07:39:08 crc kubenswrapper[4947]: I1129 07:39:08.013525 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5577dab3-c69a-480a-8fed-a9fdaf4152c8","Type":"ContainerStarted","Data":"d6b30d5859a0c00e43a9be96b8d7cd8269c3f2db4cdf5b24e4bd9e40090e16e1"} Nov 29 07:39:08 crc kubenswrapper[4947]: I1129 07:39:08.550025 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Nov 29 07:39:08 crc kubenswrapper[4947]: I1129 07:39:08.678156 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Nov 29 07:39:08 crc kubenswrapper[4947]: I1129 07:39:08.867244 4947 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Nov 29 07:39:09 crc kubenswrapper[4947]: I1129 07:39:09.046784 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Nov 29 07:39:09 crc kubenswrapper[4947]: I1129 07:39:09.051508 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c","Type":"ContainerStarted","Data":"fd2ecd89ad8f4d3b0af6f709a624bfff6806e767da119b0a4861ead1e8b00ed6"} Nov 29 07:39:09 crc kubenswrapper[4947]: I1129 07:39:09.054934 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5577dab3-c69a-480a-8fed-a9fdaf4152c8","Type":"ContainerStarted","Data":"9b99cd9e56024a11cd012b6896217ae51becf61534514dbd988ecd36be4316a8"} Nov 29 07:39:09 crc kubenswrapper[4947]: I1129 07:39:09.787066 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-9sh8j"] Nov 29 07:39:09 crc kubenswrapper[4947]: I1129 07:39:09.789153 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-9sh8j" Nov 29 07:39:09 crc kubenswrapper[4947]: I1129 07:39:09.792191 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 29 07:39:09 crc kubenswrapper[4947]: I1129 07:39:09.793695 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-ph78k" Nov 29 07:39:09 crc kubenswrapper[4947]: I1129 07:39:09.802526 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-9sh8j"] Nov 29 07:39:09 crc kubenswrapper[4947]: I1129 07:39:09.875322 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0759a6d0-0585-4855-8f73-db253214a75b-combined-ca-bundle\") pod \"manila-db-sync-9sh8j\" (UID: \"0759a6d0-0585-4855-8f73-db253214a75b\") " pod="openstack/manila-db-sync-9sh8j" Nov 29 07:39:09 crc kubenswrapper[4947]: I1129 07:39:09.875426 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0759a6d0-0585-4855-8f73-db253214a75b-config-data\") pod \"manila-db-sync-9sh8j\" (UID: \"0759a6d0-0585-4855-8f73-db253214a75b\") " pod="openstack/manila-db-sync-9sh8j" Nov 29 07:39:09 crc kubenswrapper[4947]: I1129 07:39:09.875517 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqtrr\" (UniqueName: \"kubernetes.io/projected/0759a6d0-0585-4855-8f73-db253214a75b-kube-api-access-mqtrr\") pod \"manila-db-sync-9sh8j\" (UID: \"0759a6d0-0585-4855-8f73-db253214a75b\") " pod="openstack/manila-db-sync-9sh8j" Nov 29 07:39:09 crc kubenswrapper[4947]: I1129 07:39:09.875919 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: 
\"kubernetes.io/secret/0759a6d0-0585-4855-8f73-db253214a75b-job-config-data\") pod \"manila-db-sync-9sh8j\" (UID: \"0759a6d0-0585-4855-8f73-db253214a75b\") " pod="openstack/manila-db-sync-9sh8j" Nov 29 07:39:09 crc kubenswrapper[4947]: I1129 07:39:09.979879 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/0759a6d0-0585-4855-8f73-db253214a75b-job-config-data\") pod \"manila-db-sync-9sh8j\" (UID: \"0759a6d0-0585-4855-8f73-db253214a75b\") " pod="openstack/manila-db-sync-9sh8j" Nov 29 07:39:09 crc kubenswrapper[4947]: I1129 07:39:09.980084 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0759a6d0-0585-4855-8f73-db253214a75b-combined-ca-bundle\") pod \"manila-db-sync-9sh8j\" (UID: \"0759a6d0-0585-4855-8f73-db253214a75b\") " pod="openstack/manila-db-sync-9sh8j" Nov 29 07:39:09 crc kubenswrapper[4947]: I1129 07:39:09.980682 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0759a6d0-0585-4855-8f73-db253214a75b-config-data\") pod \"manila-db-sync-9sh8j\" (UID: \"0759a6d0-0585-4855-8f73-db253214a75b\") " pod="openstack/manila-db-sync-9sh8j" Nov 29 07:39:09 crc kubenswrapper[4947]: I1129 07:39:09.981803 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqtrr\" (UniqueName: \"kubernetes.io/projected/0759a6d0-0585-4855-8f73-db253214a75b-kube-api-access-mqtrr\") pod \"manila-db-sync-9sh8j\" (UID: \"0759a6d0-0585-4855-8f73-db253214a75b\") " pod="openstack/manila-db-sync-9sh8j" Nov 29 07:39:09 crc kubenswrapper[4947]: I1129 07:39:09.992073 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0759a6d0-0585-4855-8f73-db253214a75b-config-data\") pod \"manila-db-sync-9sh8j\" (UID: 
\"0759a6d0-0585-4855-8f73-db253214a75b\") " pod="openstack/manila-db-sync-9sh8j" Nov 29 07:39:09 crc kubenswrapper[4947]: I1129 07:39:09.997884 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0759a6d0-0585-4855-8f73-db253214a75b-combined-ca-bundle\") pod \"manila-db-sync-9sh8j\" (UID: \"0759a6d0-0585-4855-8f73-db253214a75b\") " pod="openstack/manila-db-sync-9sh8j" Nov 29 07:39:10 crc kubenswrapper[4947]: I1129 07:39:10.003320 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqtrr\" (UniqueName: \"kubernetes.io/projected/0759a6d0-0585-4855-8f73-db253214a75b-kube-api-access-mqtrr\") pod \"manila-db-sync-9sh8j\" (UID: \"0759a6d0-0585-4855-8f73-db253214a75b\") " pod="openstack/manila-db-sync-9sh8j" Nov 29 07:39:10 crc kubenswrapper[4947]: I1129 07:39:10.006801 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/0759a6d0-0585-4855-8f73-db253214a75b-job-config-data\") pod \"manila-db-sync-9sh8j\" (UID: \"0759a6d0-0585-4855-8f73-db253214a75b\") " pod="openstack/manila-db-sync-9sh8j" Nov 29 07:39:10 crc kubenswrapper[4947]: I1129 07:39:10.126446 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-9sh8j" Nov 29 07:39:16 crc kubenswrapper[4947]: I1129 07:39:16.279056 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-9sh8j"] Nov 29 07:39:16 crc kubenswrapper[4947]: W1129 07:39:16.292157 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0759a6d0_0585_4855_8f73_db253214a75b.slice/crio-574340d1166afd8ba48fc65d8bb75321753af98acc2b974695530871d756cffc WatchSource:0}: Error finding container 574340d1166afd8ba48fc65d8bb75321753af98acc2b974695530871d756cffc: Status 404 returned error can't find the container with id 574340d1166afd8ba48fc65d8bb75321753af98acc2b974695530871d756cffc Nov 29 07:39:17 crc kubenswrapper[4947]: I1129 07:39:17.169411 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5577dab3-c69a-480a-8fed-a9fdaf4152c8","Type":"ContainerStarted","Data":"03b651bc4f9693dedbee4170c527040eaf6307caf028fc45291c8fe0d0adc897"} Nov 29 07:39:17 crc kubenswrapper[4947]: I1129 07:39:17.171648 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf95bd4cd-8f5st" event={"ID":"dec28edc-46ae-456a-9be2-ec56bdfd409f","Type":"ContainerStarted","Data":"aef36c05b7b3247775c426899af1ec33afa2b063acb7b808f071eca790a6a14c"} Nov 29 07:39:17 crc kubenswrapper[4947]: I1129 07:39:17.173690 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7473ae66-4c18-4a4a-92ab-cd0ce58ace1c","Type":"ContainerStarted","Data":"b25e40d1e1f04c717cf00d92d10f8114cccdb328f393ddceda44e751060ff6e3"} Nov 29 07:39:17 crc kubenswrapper[4947]: I1129 07:39:17.176654 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68c46dc8bf-mgc5p" 
event={"ID":"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e","Type":"ContainerStarted","Data":"d05b654bba351c3dc646fb160dd55507d9f26f4eb099778365614bb595ced129"} Nov 29 07:39:17 crc kubenswrapper[4947]: I1129 07:39:17.192310 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=11.192287324 podStartE2EDuration="11.192287324s" podCreationTimestamp="2025-11-29 07:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:39:17.189526744 +0000 UTC m=+3908.233908835" watchObservedRunningTime="2025-11-29 07:39:17.192287324 +0000 UTC m=+3908.236669405" Nov 29 07:39:17 crc kubenswrapper[4947]: I1129 07:39:17.195375 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6757b657b4-vdhrb" event={"ID":"4d917345-655a-4f24-bfd1-57dd9a7e9880","Type":"ContainerStarted","Data":"e3ce3774778e4c4e878e016511bfae3502f9ce5a3ae5e09c9526eb1b57c51b09"} Nov 29 07:39:17 crc kubenswrapper[4947]: I1129 07:39:17.195433 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-9sh8j" event={"ID":"0759a6d0-0585-4855-8f73-db253214a75b","Type":"ContainerStarted","Data":"574340d1166afd8ba48fc65d8bb75321753af98acc2b974695530871d756cffc"} Nov 29 07:39:17 crc kubenswrapper[4947]: I1129 07:39:17.195447 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5468b7496f-chvvc" event={"ID":"18340ffc-dd9e-437a-b838-7d63a0fd0102","Type":"ContainerStarted","Data":"e0e1f032a1815e7718c76bda796e1e9862b16f9e97d2a0c1f22e634308d8e0a3"} Nov 29 07:39:17 crc kubenswrapper[4947]: I1129 07:39:17.221641 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.221611704 podStartE2EDuration="11.221611704s" podCreationTimestamp="2025-11-29 07:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:39:17.215323256 +0000 UTC m=+3908.259705347" watchObservedRunningTime="2025-11-29 07:39:17.221611704 +0000 UTC m=+3908.265993796" Nov 29 07:39:18 crc kubenswrapper[4947]: I1129 07:39:18.179563 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:39:18 crc kubenswrapper[4947]: E1129 07:39:18.180152 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:39:18 crc kubenswrapper[4947]: I1129 07:39:18.193869 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6757b657b4-vdhrb" event={"ID":"4d917345-655a-4f24-bfd1-57dd9a7e9880","Type":"ContainerStarted","Data":"0cdf8441365e1d74791271ef903992e2409ada8c42d62cf786b96bebcc11ee74"} Nov 29 07:39:18 crc kubenswrapper[4947]: I1129 07:39:18.196919 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5468b7496f-chvvc" event={"ID":"18340ffc-dd9e-437a-b838-7d63a0fd0102","Type":"ContainerStarted","Data":"2cebbe656ea80617e388888c64afb54e5e4afc13cf51661964b26df6a719cf2a"} Nov 29 07:39:18 crc kubenswrapper[4947]: I1129 07:39:18.196985 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5468b7496f-chvvc" podUID="18340ffc-dd9e-437a-b838-7d63a0fd0102" containerName="horizon-log" containerID="cri-o://e0e1f032a1815e7718c76bda796e1e9862b16f9e97d2a0c1f22e634308d8e0a3" gracePeriod=30 Nov 29 07:39:18 crc kubenswrapper[4947]: I1129 07:39:18.197014 4947 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openstack/horizon-5468b7496f-chvvc" podUID="18340ffc-dd9e-437a-b838-7d63a0fd0102" containerName="horizon" containerID="cri-o://2cebbe656ea80617e388888c64afb54e5e4afc13cf51661964b26df6a719cf2a" gracePeriod=30 Nov 29 07:39:18 crc kubenswrapper[4947]: I1129 07:39:18.205971 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf95bd4cd-8f5st" event={"ID":"dec28edc-46ae-456a-9be2-ec56bdfd409f","Type":"ContainerStarted","Data":"9ae6aeca4958aaa8abddb6e01d2d0b4cd34a211b67647584071a9757a1733ad3"} Nov 29 07:39:18 crc kubenswrapper[4947]: I1129 07:39:18.209887 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68c46dc8bf-mgc5p" podUID="d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e" containerName="horizon-log" containerID="cri-o://d05b654bba351c3dc646fb160dd55507d9f26f4eb099778365614bb595ced129" gracePeriod=30 Nov 29 07:39:18 crc kubenswrapper[4947]: I1129 07:39:18.210164 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68c46dc8bf-mgc5p" event={"ID":"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e","Type":"ContainerStarted","Data":"44fe49be02f60c96bff52d5533117a7c792a89010c1fd72de82bbf66aba26a15"} Nov 29 07:39:18 crc kubenswrapper[4947]: I1129 07:39:18.210708 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68c46dc8bf-mgc5p" podUID="d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e" containerName="horizon" containerID="cri-o://44fe49be02f60c96bff52d5533117a7c792a89010c1fd72de82bbf66aba26a15" gracePeriod=30 Nov 29 07:39:18 crc kubenswrapper[4947]: I1129 07:39:18.218366 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6757b657b4-vdhrb" podStartSLOduration=3.76467993 podStartE2EDuration="15.218343414s" podCreationTimestamp="2025-11-29 07:39:03 +0000 UTC" firstStartedPulling="2025-11-29 07:39:04.492378001 +0000 UTC m=+3895.536760082" lastFinishedPulling="2025-11-29 07:39:15.946041485 +0000 UTC 
m=+3906.990423566" observedRunningTime="2025-11-29 07:39:18.217236156 +0000 UTC m=+3909.261618247" watchObservedRunningTime="2025-11-29 07:39:18.218343414 +0000 UTC m=+3909.262725495" Nov 29 07:39:18 crc kubenswrapper[4947]: I1129 07:39:18.255709 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5468b7496f-chvvc" podStartSLOduration=4.138812631 podStartE2EDuration="19.255687817s" podCreationTimestamp="2025-11-29 07:38:59 +0000 UTC" firstStartedPulling="2025-11-29 07:39:00.881457529 +0000 UTC m=+3891.925839610" lastFinishedPulling="2025-11-29 07:39:15.998332715 +0000 UTC m=+3907.042714796" observedRunningTime="2025-11-29 07:39:18.243058538 +0000 UTC m=+3909.287440619" watchObservedRunningTime="2025-11-29 07:39:18.255687817 +0000 UTC m=+3909.300069898" Nov 29 07:39:18 crc kubenswrapper[4947]: I1129 07:39:18.271843 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-bf95bd4cd-8f5st" podStartSLOduration=4.436695647 podStartE2EDuration="16.271814544s" podCreationTimestamp="2025-11-29 07:39:02 +0000 UTC" firstStartedPulling="2025-11-29 07:39:04.216460793 +0000 UTC m=+3895.260842874" lastFinishedPulling="2025-11-29 07:39:16.05157969 +0000 UTC m=+3907.095961771" observedRunningTime="2025-11-29 07:39:18.265570456 +0000 UTC m=+3909.309952537" watchObservedRunningTime="2025-11-29 07:39:18.271814544 +0000 UTC m=+3909.316196625" Nov 29 07:39:18 crc kubenswrapper[4947]: I1129 07:39:18.295416 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-68c46dc8bf-mgc5p" podStartSLOduration=4.63499361 podStartE2EDuration="19.29539465s" podCreationTimestamp="2025-11-29 07:38:59 +0000 UTC" firstStartedPulling="2025-11-29 07:39:01.282306531 +0000 UTC m=+3892.326688612" lastFinishedPulling="2025-11-29 07:39:15.942707571 +0000 UTC m=+3906.987089652" observedRunningTime="2025-11-29 07:39:18.289055129 +0000 UTC m=+3909.333437210" 
watchObservedRunningTime="2025-11-29 07:39:18.29539465 +0000 UTC m=+3909.339776731" Nov 29 07:39:19 crc kubenswrapper[4947]: I1129 07:39:19.966543 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5468b7496f-chvvc" Nov 29 07:39:20 crc kubenswrapper[4947]: I1129 07:39:20.338546 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68c46dc8bf-mgc5p" Nov 29 07:39:23 crc kubenswrapper[4947]: I1129 07:39:23.299141 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:23 crc kubenswrapper[4947]: I1129 07:39:23.299931 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:23 crc kubenswrapper[4947]: I1129 07:39:23.586768 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:23 crc kubenswrapper[4947]: I1129 07:39:23.586832 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:25 crc kubenswrapper[4947]: I1129 07:39:25.298609 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-9sh8j" event={"ID":"0759a6d0-0585-4855-8f73-db253214a75b","Type":"ContainerStarted","Data":"1846f44ffbb038b201977d69380185fb60435d6213183da535c84ddf89ebf1ef"} Nov 29 07:39:25 crc kubenswrapper[4947]: I1129 07:39:25.326374 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-9sh8j" podStartSLOduration=8.551823805 podStartE2EDuration="16.326345273s" podCreationTimestamp="2025-11-29 07:39:09 +0000 UTC" firstStartedPulling="2025-11-29 07:39:16.295781957 +0000 UTC m=+3907.340164038" lastFinishedPulling="2025-11-29 07:39:24.070303425 +0000 UTC m=+3915.114685506" observedRunningTime="2025-11-29 07:39:25.320504705 +0000 UTC m=+3916.364886786" 
watchObservedRunningTime="2025-11-29 07:39:25.326345273 +0000 UTC m=+3916.370727364" Nov 29 07:39:26 crc kubenswrapper[4947]: I1129 07:39:26.750469 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 29 07:39:26 crc kubenswrapper[4947]: I1129 07:39:26.750825 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 29 07:39:27 crc kubenswrapper[4947]: I1129 07:39:27.024284 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 29 07:39:27 crc kubenswrapper[4947]: I1129 07:39:27.024712 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 29 07:39:27 crc kubenswrapper[4947]: I1129 07:39:27.430811 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 29 07:39:27 crc kubenswrapper[4947]: I1129 07:39:27.441138 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 29 07:39:27 crc kubenswrapper[4947]: I1129 07:39:27.452050 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 29 07:39:27 crc kubenswrapper[4947]: I1129 07:39:27.452659 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 29 07:39:27 crc kubenswrapper[4947]: I1129 07:39:27.463811 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 29 07:39:28 crc kubenswrapper[4947]: I1129 07:39:28.347498 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 29 07:39:28 crc kubenswrapper[4947]: I1129 07:39:28.348032 4947 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 29 07:39:28 crc kubenswrapper[4947]: I1129 07:39:28.348057 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 29 07:39:29 crc kubenswrapper[4947]: I1129 07:39:29.350955 4947 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:39:30 crc kubenswrapper[4947]: I1129 07:39:30.361531 4947 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:39:30 crc kubenswrapper[4947]: I1129 07:39:30.361593 4947 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:39:30 crc kubenswrapper[4947]: I1129 07:39:30.361538 4947 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:39:30 crc kubenswrapper[4947]: I1129 07:39:30.361704 4947 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:39:32 crc kubenswrapper[4947]: I1129 07:39:32.169812 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 29 07:39:32 crc kubenswrapper[4947]: I1129 07:39:32.172711 4947 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:39:32 crc kubenswrapper[4947]: I1129 07:39:32.173378 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 29 07:39:32 crc kubenswrapper[4947]: I1129 07:39:32.218343 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 29 07:39:32 crc kubenswrapper[4947]: I1129 07:39:32.218491 4947 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:39:32 crc kubenswrapper[4947]: I1129 07:39:32.238845 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" 
Nov 29 07:39:33 crc kubenswrapper[4947]: I1129 07:39:33.179810 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:39:33 crc kubenswrapper[4947]: E1129 07:39:33.180152 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:39:33 crc kubenswrapper[4947]: I1129 07:39:33.301014 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-bf95bd4cd-8f5st" podUID="dec28edc-46ae-456a-9be2-ec56bdfd409f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused" Nov 29 07:39:33 crc kubenswrapper[4947]: I1129 07:39:33.591428 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6757b657b4-vdhrb" podUID="4d917345-655a-4f24-bfd1-57dd9a7e9880" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.245:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.245:8443: connect: connection refused" Nov 29 07:39:45 crc kubenswrapper[4947]: I1129 07:39:45.892010 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:45 crc kubenswrapper[4947]: I1129 07:39:45.893101 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:46 crc kubenswrapper[4947]: I1129 07:39:46.179828 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 
07:39:46 crc kubenswrapper[4947]: E1129 07:39:46.181434 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:39:48 crc kubenswrapper[4947]: I1129 07:39:48.065597 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:39:48 crc kubenswrapper[4947]: I1129 07:39:48.359881 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6757b657b4-vdhrb" Nov 29 07:39:48 crc kubenswrapper[4947]: I1129 07:39:48.481806 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-bf95bd4cd-8f5st"] Nov 29 07:39:48 crc kubenswrapper[4947]: I1129 07:39:48.617686 4947 generic.go:334] "Generic (PLEG): container finished" podID="d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e" containerID="44fe49be02f60c96bff52d5533117a7c792a89010c1fd72de82bbf66aba26a15" exitCode=137 Nov 29 07:39:48 crc kubenswrapper[4947]: I1129 07:39:48.617723 4947 generic.go:334] "Generic (PLEG): container finished" podID="d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e" containerID="d05b654bba351c3dc646fb160dd55507d9f26f4eb099778365614bb595ced129" exitCode=137 Nov 29 07:39:48 crc kubenswrapper[4947]: I1129 07:39:48.617792 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68c46dc8bf-mgc5p" event={"ID":"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e","Type":"ContainerDied","Data":"44fe49be02f60c96bff52d5533117a7c792a89010c1fd72de82bbf66aba26a15"} Nov 29 07:39:48 crc kubenswrapper[4947]: I1129 07:39:48.617861 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68c46dc8bf-mgc5p" 
event={"ID":"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e","Type":"ContainerDied","Data":"d05b654bba351c3dc646fb160dd55507d9f26f4eb099778365614bb595ced129"} Nov 29 07:39:48 crc kubenswrapper[4947]: I1129 07:39:48.620096 4947 generic.go:334] "Generic (PLEG): container finished" podID="18340ffc-dd9e-437a-b838-7d63a0fd0102" containerID="2cebbe656ea80617e388888c64afb54e5e4afc13cf51661964b26df6a719cf2a" exitCode=137 Nov 29 07:39:48 crc kubenswrapper[4947]: I1129 07:39:48.620128 4947 generic.go:334] "Generic (PLEG): container finished" podID="18340ffc-dd9e-437a-b838-7d63a0fd0102" containerID="e0e1f032a1815e7718c76bda796e1e9862b16f9e97d2a0c1f22e634308d8e0a3" exitCode=137 Nov 29 07:39:48 crc kubenswrapper[4947]: I1129 07:39:48.620197 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5468b7496f-chvvc" event={"ID":"18340ffc-dd9e-437a-b838-7d63a0fd0102","Type":"ContainerDied","Data":"2cebbe656ea80617e388888c64afb54e5e4afc13cf51661964b26df6a719cf2a"} Nov 29 07:39:48 crc kubenswrapper[4947]: I1129 07:39:48.620267 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5468b7496f-chvvc" event={"ID":"18340ffc-dd9e-437a-b838-7d63a0fd0102","Type":"ContainerDied","Data":"e0e1f032a1815e7718c76bda796e1e9862b16f9e97d2a0c1f22e634308d8e0a3"} Nov 29 07:39:48 crc kubenswrapper[4947]: I1129 07:39:48.620458 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-bf95bd4cd-8f5st" podUID="dec28edc-46ae-456a-9be2-ec56bdfd409f" containerName="horizon-log" containerID="cri-o://aef36c05b7b3247775c426899af1ec33afa2b063acb7b808f071eca790a6a14c" gracePeriod=30 Nov 29 07:39:48 crc kubenswrapper[4947]: I1129 07:39:48.620656 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-bf95bd4cd-8f5st" podUID="dec28edc-46ae-456a-9be2-ec56bdfd409f" containerName="horizon" containerID="cri-o://9ae6aeca4958aaa8abddb6e01d2d0b4cd34a211b67647584071a9757a1733ad3" gracePeriod=30 Nov 29 
07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.046896 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68c46dc8bf-mgc5p" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.054553 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5468b7496f-chvvc" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.191298 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwhgs\" (UniqueName: \"kubernetes.io/projected/18340ffc-dd9e-437a-b838-7d63a0fd0102-kube-api-access-wwhgs\") pod \"18340ffc-dd9e-437a-b838-7d63a0fd0102\" (UID: \"18340ffc-dd9e-437a-b838-7d63a0fd0102\") " Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.191460 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18340ffc-dd9e-437a-b838-7d63a0fd0102-scripts\") pod \"18340ffc-dd9e-437a-b838-7d63a0fd0102\" (UID: \"18340ffc-dd9e-437a-b838-7d63a0fd0102\") " Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.191583 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18340ffc-dd9e-437a-b838-7d63a0fd0102-config-data\") pod \"18340ffc-dd9e-437a-b838-7d63a0fd0102\" (UID: \"18340ffc-dd9e-437a-b838-7d63a0fd0102\") " Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.191644 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-horizon-secret-key\") pod \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\" (UID: \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\") " Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.191754 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-scripts\") pod \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\" (UID: \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\") " Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.191785 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-config-data\") pod \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\" (UID: \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\") " Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.191882 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-logs\") pod \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\" (UID: \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\") " Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.191928 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18340ffc-dd9e-437a-b838-7d63a0fd0102-logs\") pod \"18340ffc-dd9e-437a-b838-7d63a0fd0102\" (UID: \"18340ffc-dd9e-437a-b838-7d63a0fd0102\") " Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.191977 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/18340ffc-dd9e-437a-b838-7d63a0fd0102-horizon-secret-key\") pod \"18340ffc-dd9e-437a-b838-7d63a0fd0102\" (UID: \"18340ffc-dd9e-437a-b838-7d63a0fd0102\") " Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.192122 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ctss\" (UniqueName: \"kubernetes.io/projected/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-kube-api-access-2ctss\") pod \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\" (UID: \"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e\") " Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.196991 4947 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-logs" (OuterVolumeSpecName: "logs") pod "d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e" (UID: "d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.197068 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18340ffc-dd9e-437a-b838-7d63a0fd0102-logs" (OuterVolumeSpecName: "logs") pod "18340ffc-dd9e-437a-b838-7d63a0fd0102" (UID: "18340ffc-dd9e-437a-b838-7d63a0fd0102"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.204492 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18340ffc-dd9e-437a-b838-7d63a0fd0102-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "18340ffc-dd9e-437a-b838-7d63a0fd0102" (UID: "18340ffc-dd9e-437a-b838-7d63a0fd0102"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.219044 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18340ffc-dd9e-437a-b838-7d63a0fd0102-kube-api-access-wwhgs" (OuterVolumeSpecName: "kube-api-access-wwhgs") pod "18340ffc-dd9e-437a-b838-7d63a0fd0102" (UID: "18340ffc-dd9e-437a-b838-7d63a0fd0102"). InnerVolumeSpecName "kube-api-access-wwhgs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.219745 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-kube-api-access-2ctss" (OuterVolumeSpecName: "kube-api-access-2ctss") pod "d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e" (UID: "d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e"). InnerVolumeSpecName "kube-api-access-2ctss". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.220850 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e" (UID: "d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.243695 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-config-data" (OuterVolumeSpecName: "config-data") pod "d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e" (UID: "d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.245147 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18340ffc-dd9e-437a-b838-7d63a0fd0102-scripts" (OuterVolumeSpecName: "scripts") pod "18340ffc-dd9e-437a-b838-7d63a0fd0102" (UID: "18340ffc-dd9e-437a-b838-7d63a0fd0102"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.250982 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18340ffc-dd9e-437a-b838-7d63a0fd0102-config-data" (OuterVolumeSpecName: "config-data") pod "18340ffc-dd9e-437a-b838-7d63a0fd0102" (UID: "18340ffc-dd9e-437a-b838-7d63a0fd0102"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.266648 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-scripts" (OuterVolumeSpecName: "scripts") pod "d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e" (UID: "d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.299404 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ctss\" (UniqueName: \"kubernetes.io/projected/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-kube-api-access-2ctss\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.299850 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwhgs\" (UniqueName: \"kubernetes.io/projected/18340ffc-dd9e-437a-b838-7d63a0fd0102-kube-api-access-wwhgs\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.299990 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18340ffc-dd9e-437a-b838-7d63a0fd0102-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.300103 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18340ffc-dd9e-437a-b838-7d63a0fd0102-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 
07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.300212 4947 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.300359 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.300460 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.300570 4947 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.300683 4947 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18340ffc-dd9e-437a-b838-7d63a0fd0102-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.300819 4947 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/18340ffc-dd9e-437a-b838-7d63a0fd0102-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.670492 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68c46dc8bf-mgc5p" event={"ID":"d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e","Type":"ContainerDied","Data":"d10eef009857a2306aeeaea68249eb64ecda22ca0133944d0d91145d6dccd39e"} Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.670963 4947 scope.go:117] "RemoveContainer" 
containerID="44fe49be02f60c96bff52d5533117a7c792a89010c1fd72de82bbf66aba26a15" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.671321 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68c46dc8bf-mgc5p" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.689889 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5468b7496f-chvvc" event={"ID":"18340ffc-dd9e-437a-b838-7d63a0fd0102","Type":"ContainerDied","Data":"8f8f0b0bc92736696ebd3ee8a4832ab677e7e4cc732e98c11de88610d417a911"} Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.690168 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5468b7496f-chvvc" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.723107 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68c46dc8bf-mgc5p"] Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.745875 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-68c46dc8bf-mgc5p"] Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.766313 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5468b7496f-chvvc"] Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.781009 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5468b7496f-chvvc"] Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.923595 4947 scope.go:117] "RemoveContainer" containerID="d05b654bba351c3dc646fb160dd55507d9f26f4eb099778365614bb595ced129" Nov 29 07:39:50 crc kubenswrapper[4947]: I1129 07:39:50.960701 4947 scope.go:117] "RemoveContainer" containerID="2cebbe656ea80617e388888c64afb54e5e4afc13cf51661964b26df6a719cf2a" Nov 29 07:39:51 crc kubenswrapper[4947]: I1129 07:39:51.169618 4947 scope.go:117] "RemoveContainer" containerID="e0e1f032a1815e7718c76bda796e1e9862b16f9e97d2a0c1f22e634308d8e0a3" Nov 29 07:39:51 crc kubenswrapper[4947]: I1129 07:39:51.201156 
4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18340ffc-dd9e-437a-b838-7d63a0fd0102" path="/var/lib/kubelet/pods/18340ffc-dd9e-437a-b838-7d63a0fd0102/volumes" Nov 29 07:39:51 crc kubenswrapper[4947]: I1129 07:39:51.201955 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e" path="/var/lib/kubelet/pods/d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e/volumes" Nov 29 07:39:52 crc kubenswrapper[4947]: I1129 07:39:52.718445 4947 generic.go:334] "Generic (PLEG): container finished" podID="dec28edc-46ae-456a-9be2-ec56bdfd409f" containerID="9ae6aeca4958aaa8abddb6e01d2d0b4cd34a211b67647584071a9757a1733ad3" exitCode=0 Nov 29 07:39:52 crc kubenswrapper[4947]: I1129 07:39:52.718543 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf95bd4cd-8f5st" event={"ID":"dec28edc-46ae-456a-9be2-ec56bdfd409f","Type":"ContainerDied","Data":"9ae6aeca4958aaa8abddb6e01d2d0b4cd34a211b67647584071a9757a1733ad3"} Nov 29 07:39:53 crc kubenswrapper[4947]: I1129 07:39:53.300384 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-bf95bd4cd-8f5st" podUID="dec28edc-46ae-456a-9be2-ec56bdfd409f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused" Nov 29 07:39:54 crc kubenswrapper[4947]: I1129 07:39:54.741451 4947 generic.go:334] "Generic (PLEG): container finished" podID="0759a6d0-0585-4855-8f73-db253214a75b" containerID="1846f44ffbb038b201977d69380185fb60435d6213183da535c84ddf89ebf1ef" exitCode=0 Nov 29 07:39:54 crc kubenswrapper[4947]: I1129 07:39:54.741870 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-9sh8j" event={"ID":"0759a6d0-0585-4855-8f73-db253214a75b","Type":"ContainerDied","Data":"1846f44ffbb038b201977d69380185fb60435d6213183da535c84ddf89ebf1ef"} Nov 29 07:39:56 crc kubenswrapper[4947]: I1129 
07:39:56.251881 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-9sh8j" Nov 29 07:39:56 crc kubenswrapper[4947]: I1129 07:39:56.399452 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqtrr\" (UniqueName: \"kubernetes.io/projected/0759a6d0-0585-4855-8f73-db253214a75b-kube-api-access-mqtrr\") pod \"0759a6d0-0585-4855-8f73-db253214a75b\" (UID: \"0759a6d0-0585-4855-8f73-db253214a75b\") " Nov 29 07:39:56 crc kubenswrapper[4947]: I1129 07:39:56.399641 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/0759a6d0-0585-4855-8f73-db253214a75b-job-config-data\") pod \"0759a6d0-0585-4855-8f73-db253214a75b\" (UID: \"0759a6d0-0585-4855-8f73-db253214a75b\") " Nov 29 07:39:56 crc kubenswrapper[4947]: I1129 07:39:56.399693 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0759a6d0-0585-4855-8f73-db253214a75b-combined-ca-bundle\") pod \"0759a6d0-0585-4855-8f73-db253214a75b\" (UID: \"0759a6d0-0585-4855-8f73-db253214a75b\") " Nov 29 07:39:56 crc kubenswrapper[4947]: I1129 07:39:56.399756 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0759a6d0-0585-4855-8f73-db253214a75b-config-data\") pod \"0759a6d0-0585-4855-8f73-db253214a75b\" (UID: \"0759a6d0-0585-4855-8f73-db253214a75b\") " Nov 29 07:39:56 crc kubenswrapper[4947]: I1129 07:39:56.409631 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0759a6d0-0585-4855-8f73-db253214a75b-kube-api-access-mqtrr" (OuterVolumeSpecName: "kube-api-access-mqtrr") pod "0759a6d0-0585-4855-8f73-db253214a75b" (UID: "0759a6d0-0585-4855-8f73-db253214a75b"). InnerVolumeSpecName "kube-api-access-mqtrr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:56 crc kubenswrapper[4947]: I1129 07:39:56.409828 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0759a6d0-0585-4855-8f73-db253214a75b-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "0759a6d0-0585-4855-8f73-db253214a75b" (UID: "0759a6d0-0585-4855-8f73-db253214a75b"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:56 crc kubenswrapper[4947]: I1129 07:39:56.417576 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0759a6d0-0585-4855-8f73-db253214a75b-config-data" (OuterVolumeSpecName: "config-data") pod "0759a6d0-0585-4855-8f73-db253214a75b" (UID: "0759a6d0-0585-4855-8f73-db253214a75b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:56 crc kubenswrapper[4947]: I1129 07:39:56.436399 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0759a6d0-0585-4855-8f73-db253214a75b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0759a6d0-0585-4855-8f73-db253214a75b" (UID: "0759a6d0-0585-4855-8f73-db253214a75b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:56 crc kubenswrapper[4947]: I1129 07:39:56.502204 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqtrr\" (UniqueName: \"kubernetes.io/projected/0759a6d0-0585-4855-8f73-db253214a75b-kube-api-access-mqtrr\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:56 crc kubenswrapper[4947]: I1129 07:39:56.502274 4947 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/0759a6d0-0585-4855-8f73-db253214a75b-job-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:56 crc kubenswrapper[4947]: I1129 07:39:56.502284 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0759a6d0-0585-4855-8f73-db253214a75b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:56 crc kubenswrapper[4947]: I1129 07:39:56.502292 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0759a6d0-0585-4855-8f73-db253214a75b-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:56 crc kubenswrapper[4947]: I1129 07:39:56.763030 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-9sh8j" event={"ID":"0759a6d0-0585-4855-8f73-db253214a75b","Type":"ContainerDied","Data":"574340d1166afd8ba48fc65d8bb75321753af98acc2b974695530871d756cffc"} Nov 29 07:39:56 crc kubenswrapper[4947]: I1129 07:39:56.763103 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="574340d1166afd8ba48fc65d8bb75321753af98acc2b974695530871d756cffc" Nov 29 07:39:56 crc kubenswrapper[4947]: I1129 07:39:56.763090 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-9sh8j" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.099513 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Nov 29 07:39:57 crc kubenswrapper[4947]: E1129 07:39:57.100335 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18340ffc-dd9e-437a-b838-7d63a0fd0102" containerName="horizon" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.100357 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="18340ffc-dd9e-437a-b838-7d63a0fd0102" containerName="horizon" Nov 29 07:39:57 crc kubenswrapper[4947]: E1129 07:39:57.100385 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e" containerName="horizon" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.100392 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e" containerName="horizon" Nov 29 07:39:57 crc kubenswrapper[4947]: E1129 07:39:57.100416 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e" containerName="horizon-log" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.100422 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e" containerName="horizon-log" Nov 29 07:39:57 crc kubenswrapper[4947]: E1129 07:39:57.100429 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18340ffc-dd9e-437a-b838-7d63a0fd0102" containerName="horizon-log" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.100435 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="18340ffc-dd9e-437a-b838-7d63a0fd0102" containerName="horizon-log" Nov 29 07:39:57 crc kubenswrapper[4947]: E1129 07:39:57.100451 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0759a6d0-0585-4855-8f73-db253214a75b" containerName="manila-db-sync" Nov 29 07:39:57 crc kubenswrapper[4947]: 
I1129 07:39:57.100459 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="0759a6d0-0585-4855-8f73-db253214a75b" containerName="manila-db-sync" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.100618 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="18340ffc-dd9e-437a-b838-7d63a0fd0102" containerName="horizon" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.100637 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="18340ffc-dd9e-437a-b838-7d63a0fd0102" containerName="horizon-log" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.100649 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="0759a6d0-0585-4855-8f73-db253214a75b" containerName="manila-db-sync" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.100661 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e" containerName="horizon-log" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.100668 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d5f6b4-d1bf-4b69-9a6d-0388bf90264e" containerName="horizon" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.101754 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.107080 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.107341 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.107397 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-ph78k" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.110924 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.134502 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.148441 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.150273 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.156650 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.166996 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.219996 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tns6k\" (UniqueName: \"kubernetes.io/projected/d3b88e10-85c8-44e5-b382-7930353e7201-kube-api-access-tns6k\") pod \"manila-scheduler-0\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") " pod="openstack/manila-scheduler-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.220089 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") " pod="openstack/manila-scheduler-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.220305 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-scripts\") pod \"manila-scheduler-0\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") " pod="openstack/manila-scheduler-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.220346 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") " pod="openstack/manila-scheduler-0" Nov 29 07:39:57 crc kubenswrapper[4947]: 
I1129 07:39:57.220373 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-config-data\") pod \"manila-scheduler-0\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") " pod="openstack/manila-scheduler-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.220470 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d3b88e10-85c8-44e5-b382-7930353e7201-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") " pod="openstack/manila-scheduler-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.337211 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.347604 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-config-data\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.347665 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fca98943-b0c1-461f-91fb-56f3f476810c-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.347741 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-scripts\") pod \"manila-scheduler-0\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") " pod="openstack/manila-scheduler-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.347786 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") " pod="openstack/manila-scheduler-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.347822 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-config-data\") pod \"manila-scheduler-0\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") " pod="openstack/manila-scheduler-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.347912 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-scripts\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.347979 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhs77\" (UniqueName: \"kubernetes.io/projected/fca98943-b0c1-461f-91fb-56f3f476810c-kube-api-access-mhs77\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.348034 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/d3b88e10-85c8-44e5-b382-7930353e7201-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") " pod="openstack/manila-scheduler-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.348111 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tns6k\" (UniqueName: \"kubernetes.io/projected/d3b88e10-85c8-44e5-b382-7930353e7201-kube-api-access-tns6k\") pod \"manila-scheduler-0\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") " pod="openstack/manila-scheduler-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.348165 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") " pod="openstack/manila-scheduler-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.348262 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fca98943-b0c1-461f-91fb-56f3f476810c-ceph\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.348350 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/fca98943-b0c1-461f-91fb-56f3f476810c-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.348409 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-combined-ca-bundle\") 
pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.370301 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d3b88e10-85c8-44e5-b382-7930353e7201-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") " pod="openstack/manila-scheduler-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.423752 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-mqmgd"] Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.444883 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.451555 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fca98943-b0c1-461f-91fb-56f3f476810c-ceph\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.451650 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/fca98943-b0c1-461f-91fb-56f3f476810c-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.451684 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.451736 
4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.451774 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-config-data\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.451793 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fca98943-b0c1-461f-91fb-56f3f476810c-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.451869 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-scripts\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.451894 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhs77\" (UniqueName: \"kubernetes.io/projected/fca98943-b0c1-461f-91fb-56f3f476810c-kube-api-access-mhs77\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.455423 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/fca98943-b0c1-461f-91fb-56f3f476810c-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.462167 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/fca98943-b0c1-461f-91fb-56f3f476810c-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.497562 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-mqmgd"] Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.555580 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfb75\" (UniqueName: \"kubernetes.io/projected/3eb0b137-b0e2-495d-afdf-a81bdc9b10b2-kube-api-access-hfb75\") pod \"dnsmasq-dns-76b5fdb995-mqmgd\" (UID: \"3eb0b137-b0e2-495d-afdf-a81bdc9b10b2\") " pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.555770 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb0b137-b0e2-495d-afdf-a81bdc9b10b2-config\") pod \"dnsmasq-dns-76b5fdb995-mqmgd\" (UID: \"3eb0b137-b0e2-495d-afdf-a81bdc9b10b2\") " pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.555806 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb0b137-b0e2-495d-afdf-a81bdc9b10b2-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-mqmgd\" (UID: \"3eb0b137-b0e2-495d-afdf-a81bdc9b10b2\") " pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 
07:39:57.555844 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb0b137-b0e2-495d-afdf-a81bdc9b10b2-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-mqmgd\" (UID: \"3eb0b137-b0e2-495d-afdf-a81bdc9b10b2\") " pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.555870 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3eb0b137-b0e2-495d-afdf-a81bdc9b10b2-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-mqmgd\" (UID: \"3eb0b137-b0e2-495d-afdf-a81bdc9b10b2\") " pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.556105 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb0b137-b0e2-495d-afdf-a81bdc9b10b2-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-mqmgd\" (UID: \"3eb0b137-b0e2-495d-afdf-a81bdc9b10b2\") " pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.658259 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb0b137-b0e2-495d-afdf-a81bdc9b10b2-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-mqmgd\" (UID: \"3eb0b137-b0e2-495d-afdf-a81bdc9b10b2\") " pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.658399 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfb75\" (UniqueName: \"kubernetes.io/projected/3eb0b137-b0e2-495d-afdf-a81bdc9b10b2-kube-api-access-hfb75\") pod \"dnsmasq-dns-76b5fdb995-mqmgd\" (UID: \"3eb0b137-b0e2-495d-afdf-a81bdc9b10b2\") " pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" Nov 29 07:39:57 crc 
kubenswrapper[4947]: I1129 07:39:57.658448 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb0b137-b0e2-495d-afdf-a81bdc9b10b2-config\") pod \"dnsmasq-dns-76b5fdb995-mqmgd\" (UID: \"3eb0b137-b0e2-495d-afdf-a81bdc9b10b2\") " pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.658505 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb0b137-b0e2-495d-afdf-a81bdc9b10b2-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-mqmgd\" (UID: \"3eb0b137-b0e2-495d-afdf-a81bdc9b10b2\") " pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.658572 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb0b137-b0e2-495d-afdf-a81bdc9b10b2-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-mqmgd\" (UID: \"3eb0b137-b0e2-495d-afdf-a81bdc9b10b2\") " pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.658610 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3eb0b137-b0e2-495d-afdf-a81bdc9b10b2-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-mqmgd\" (UID: \"3eb0b137-b0e2-495d-afdf-a81bdc9b10b2\") " pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.659705 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb0b137-b0e2-495d-afdf-a81bdc9b10b2-config\") pod \"dnsmasq-dns-76b5fdb995-mqmgd\" (UID: \"3eb0b137-b0e2-495d-afdf-a81bdc9b10b2\") " pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.659754 4947 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb0b137-b0e2-495d-afdf-a81bdc9b10b2-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-mqmgd\" (UID: \"3eb0b137-b0e2-495d-afdf-a81bdc9b10b2\") " pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.662648 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3eb0b137-b0e2-495d-afdf-a81bdc9b10b2-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-mqmgd\" (UID: \"3eb0b137-b0e2-495d-afdf-a81bdc9b10b2\") " pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.662791 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb0b137-b0e2-495d-afdf-a81bdc9b10b2-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-mqmgd\" (UID: \"3eb0b137-b0e2-495d-afdf-a81bdc9b10b2\") " pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.663272 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb0b137-b0e2-495d-afdf-a81bdc9b10b2-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-mqmgd\" (UID: \"3eb0b137-b0e2-495d-afdf-a81bdc9b10b2\") " pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.785167 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.787880 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.796996 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.805960 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.830854 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") " pod="openstack/manila-scheduler-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.831267 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") " pod="openstack/manila-scheduler-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.831607 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tns6k\" (UniqueName: \"kubernetes.io/projected/d3b88e10-85c8-44e5-b382-7930353e7201-kube-api-access-tns6k\") pod \"manila-scheduler-0\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") " pod="openstack/manila-scheduler-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.831748 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-scripts\") pod \"manila-scheduler-0\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") " pod="openstack/manila-scheduler-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.832678 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-config-data\") pod \"manila-scheduler-0\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") " pod="openstack/manila-scheduler-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.834545 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.835605 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-scripts\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.837877 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fca98943-b0c1-461f-91fb-56f3f476810c-ceph\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.838064 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.838958 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-config-data\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 
07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.840155 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhs77\" (UniqueName: \"kubernetes.io/projected/fca98943-b0c1-461f-91fb-56f3f476810c-kube-api-access-mhs77\") pod \"manila-share-share1-0\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " pod="openstack/manila-share-share1-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.867586 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfb75\" (UniqueName: \"kubernetes.io/projected/3eb0b137-b0e2-495d-afdf-a81bdc9b10b2-kube-api-access-hfb75\") pod \"dnsmasq-dns-76b5fdb995-mqmgd\" (UID: \"3eb0b137-b0e2-495d-afdf-a81bdc9b10b2\") " pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.966313 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b04a472b-82b0-4fe4-9d6c-49b9481034bb-logs\") pod \"manila-api-0\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") " pod="openstack/manila-api-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.966395 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-scripts\") pod \"manila-api-0\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") " pod="openstack/manila-api-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.966675 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") " pod="openstack/manila-api-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.966970 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b04a472b-82b0-4fe4-9d6c-49b9481034bb-etc-machine-id\") pod \"manila-api-0\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") " pod="openstack/manila-api-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.967053 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-config-data\") pod \"manila-api-0\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") " pod="openstack/manila-api-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.967252 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-config-data-custom\") pod \"manila-api-0\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") " pod="openstack/manila-api-0" Nov 29 07:39:57 crc kubenswrapper[4947]: I1129 07:39:57.967376 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5vhf\" (UniqueName: \"kubernetes.io/projected/b04a472b-82b0-4fe4-9d6c-49b9481034bb-kube-api-access-t5vhf\") pod \"manila-api-0\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") " pod="openstack/manila-api-0" Nov 29 07:39:58 crc kubenswrapper[4947]: I1129 07:39:58.023621 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 29 07:39:58 crc kubenswrapper[4947]: I1129 07:39:58.069567 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b04a472b-82b0-4fe4-9d6c-49b9481034bb-etc-machine-id\") pod \"manila-api-0\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") " pod="openstack/manila-api-0" Nov 29 07:39:58 crc kubenswrapper[4947]: I1129 07:39:58.069663 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-config-data\") pod \"manila-api-0\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") " pod="openstack/manila-api-0" Nov 29 07:39:58 crc kubenswrapper[4947]: I1129 07:39:58.069723 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b04a472b-82b0-4fe4-9d6c-49b9481034bb-etc-machine-id\") pod \"manila-api-0\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") " pod="openstack/manila-api-0" Nov 29 07:39:58 crc kubenswrapper[4947]: I1129 07:39:58.069763 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-config-data-custom\") pod \"manila-api-0\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") " pod="openstack/manila-api-0" Nov 29 07:39:58 crc kubenswrapper[4947]: I1129 07:39:58.069849 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5vhf\" (UniqueName: \"kubernetes.io/projected/b04a472b-82b0-4fe4-9d6c-49b9481034bb-kube-api-access-t5vhf\") pod \"manila-api-0\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") " pod="openstack/manila-api-0" Nov 29 07:39:58 crc kubenswrapper[4947]: I1129 07:39:58.069900 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/b04a472b-82b0-4fe4-9d6c-49b9481034bb-logs\") pod \"manila-api-0\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") " pod="openstack/manila-api-0" Nov 29 07:39:58 crc kubenswrapper[4947]: I1129 07:39:58.069958 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-scripts\") pod \"manila-api-0\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") " pod="openstack/manila-api-0" Nov 29 07:39:58 crc kubenswrapper[4947]: I1129 07:39:58.070042 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") " pod="openstack/manila-api-0" Nov 29 07:39:58 crc kubenswrapper[4947]: I1129 07:39:58.071440 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b04a472b-82b0-4fe4-9d6c-49b9481034bb-logs\") pod \"manila-api-0\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") " pod="openstack/manila-api-0" Nov 29 07:39:58 crc kubenswrapper[4947]: I1129 07:39:58.074030 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 29 07:39:58 crc kubenswrapper[4947]: I1129 07:39:58.079335 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-config-data-custom\") pod \"manila-api-0\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") " pod="openstack/manila-api-0" Nov 29 07:39:58 crc kubenswrapper[4947]: I1129 07:39:58.081160 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-config-data\") pod \"manila-api-0\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") " pod="openstack/manila-api-0" Nov 29 07:39:58 crc kubenswrapper[4947]: I1129 07:39:58.085837 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-scripts\") pod \"manila-api-0\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") " pod="openstack/manila-api-0" Nov 29 07:39:58 crc kubenswrapper[4947]: I1129 07:39:58.093289 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") " pod="openstack/manila-api-0" Nov 29 07:39:58 crc kubenswrapper[4947]: I1129 07:39:58.106012 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd"
Nov 29 07:39:58 crc kubenswrapper[4947]: I1129 07:39:58.106432 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5vhf\" (UniqueName: \"kubernetes.io/projected/b04a472b-82b0-4fe4-9d6c-49b9481034bb-kube-api-access-t5vhf\") pod \"manila-api-0\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") " pod="openstack/manila-api-0"
Nov 29 07:39:58 crc kubenswrapper[4947]: I1129 07:39:58.122641 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Nov 29 07:39:58 crc kubenswrapper[4947]: I1129 07:39:58.642735 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Nov 29 07:39:58 crc kubenswrapper[4947]: I1129 07:39:58.776875 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-mqmgd"]
Nov 29 07:39:58 crc kubenswrapper[4947]: I1129 07:39:58.804563 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d3b88e10-85c8-44e5-b382-7930353e7201","Type":"ContainerStarted","Data":"c450bd48dbc42b73fc1ec64a29c8ae6eda1376c1720d52bb8fea0bd44432db19"}
Nov 29 07:39:59 crc kubenswrapper[4947]: I1129 07:39:58.921938 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Nov 29 07:39:59 crc kubenswrapper[4947]: I1129 07:39:59.006561 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Nov 29 07:39:59 crc kubenswrapper[4947]: I1129 07:39:59.191341 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166"
Nov 29 07:39:59 crc kubenswrapper[4947]: E1129 07:39:59.191721 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7"
Nov 29 07:39:59 crc kubenswrapper[4947]: I1129 07:39:59.830281 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"fca98943-b0c1-461f-91fb-56f3f476810c","Type":"ContainerStarted","Data":"41e631925b09eb92d0d1359728dad7f81099c95a05da334273fb29ab54fdd000"}
Nov 29 07:39:59 crc kubenswrapper[4947]: I1129 07:39:59.832742 4947 generic.go:334] "Generic (PLEG): container finished" podID="3eb0b137-b0e2-495d-afdf-a81bdc9b10b2" containerID="c9401df021c95359c53c54c0bb7c270dddf1b5a69cdcef128f2b33043121f12a" exitCode=0
Nov 29 07:39:59 crc kubenswrapper[4947]: I1129 07:39:59.832832 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" event={"ID":"3eb0b137-b0e2-495d-afdf-a81bdc9b10b2","Type":"ContainerDied","Data":"c9401df021c95359c53c54c0bb7c270dddf1b5a69cdcef128f2b33043121f12a"}
Nov 29 07:39:59 crc kubenswrapper[4947]: I1129 07:39:59.832920 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" event={"ID":"3eb0b137-b0e2-495d-afdf-a81bdc9b10b2","Type":"ContainerStarted","Data":"49c15eaa40496282f05ff9dd4eb3890b2f837bd97eec4405ad566ca90855a556"}
Nov 29 07:39:59 crc kubenswrapper[4947]: I1129 07:39:59.841613 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"b04a472b-82b0-4fe4-9d6c-49b9481034bb","Type":"ContainerStarted","Data":"1b54eae459c1f711f82130515fd5f40b733a67ffdd9e7f03ce3fdf9a23d3bfeb"}
Nov 29 07:39:59 crc kubenswrapper[4947]: I1129 07:39:59.841658 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"b04a472b-82b0-4fe4-9d6c-49b9481034bb","Type":"ContainerStarted","Data":"285acd300ad1ff054c6de43516064ca58752e66cb1f7c23bfe84cc67eb2bc09c"}
Nov 29 07:40:00 crc kubenswrapper[4947]: I1129 07:40:00.792778 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"]
Nov 29 07:40:00 crc kubenswrapper[4947]: I1129 07:40:00.870140 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" event={"ID":"3eb0b137-b0e2-495d-afdf-a81bdc9b10b2","Type":"ContainerStarted","Data":"071ebdc9b1e3ea7742f9f9d429ed6702ecdb98a8b45fb9384c16d813bc6ec7cd"}
Nov 29 07:40:00 crc kubenswrapper[4947]: I1129 07:40:00.871635 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd"
Nov 29 07:40:00 crc kubenswrapper[4947]: I1129 07:40:00.880276 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d3b88e10-85c8-44e5-b382-7930353e7201","Type":"ContainerStarted","Data":"854f6c5a7e8fdc6f7cffdebb9292ff392b8e94cfee13cf051371f79e0a671848"}
Nov 29 07:40:00 crc kubenswrapper[4947]: I1129 07:40:00.882592 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"b04a472b-82b0-4fe4-9d6c-49b9481034bb","Type":"ContainerStarted","Data":"ad64405e931ade59a738488f81bcfe6c8c134f3724ba169352c9421b4ee8b693"}
Nov 29 07:40:00 crc kubenswrapper[4947]: I1129 07:40:00.882755 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0"
Nov 29 07:40:00 crc kubenswrapper[4947]: I1129 07:40:00.895755 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd" podStartSLOduration=3.89573348 podStartE2EDuration="3.89573348s" podCreationTimestamp="2025-11-29 07:39:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:40:00.895133145 +0000 UTC m=+3951.939515226" watchObservedRunningTime="2025-11-29 07:40:00.89573348 +0000 UTC m=+3951.940115561"
Nov 29 07:40:00 crc kubenswrapper[4947]: I1129 07:40:00.955428 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.955403637 podStartE2EDuration="3.955403637s" podCreationTimestamp="2025-11-29 07:39:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:40:00.938084479 +0000 UTC m=+3951.982466560" watchObservedRunningTime="2025-11-29 07:40:00.955403637 +0000 UTC m=+3951.999785708"
Nov 29 07:40:01 crc kubenswrapper[4947]: I1129 07:40:01.895928 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d3b88e10-85c8-44e5-b382-7930353e7201","Type":"ContainerStarted","Data":"4e2ed1794bafc7102002897561ba63320ef019ca3a755ef510bd2022af0789a8"}
Nov 29 07:40:01 crc kubenswrapper[4947]: I1129 07:40:01.896442 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="b04a472b-82b0-4fe4-9d6c-49b9481034bb" containerName="manila-api" containerID="cri-o://ad64405e931ade59a738488f81bcfe6c8c134f3724ba169352c9421b4ee8b693" gracePeriod=30
Nov 29 07:40:01 crc kubenswrapper[4947]: I1129 07:40:01.896409 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="b04a472b-82b0-4fe4-9d6c-49b9481034bb" containerName="manila-api-log" containerID="cri-o://1b54eae459c1f711f82130515fd5f40b733a67ffdd9e7f03ce3fdf9a23d3bfeb" gracePeriod=30
Nov 29 07:40:01 crc kubenswrapper[4947]: I1129 07:40:01.927336 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=4.107405636 podStartE2EDuration="4.927309699s" podCreationTimestamp="2025-11-29 07:39:57 +0000 UTC" firstStartedPulling="2025-11-29 07:39:58.647513949 +0000 UTC m=+3949.691896040" lastFinishedPulling="2025-11-29 07:39:59.467418022 +0000 UTC m=+3950.511800103" observedRunningTime="2025-11-29 07:40:01.916001484 +0000 UTC m=+3952.960383565" watchObservedRunningTime="2025-11-29 07:40:01.927309699 +0000 UTC m=+3952.971691780"
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.713914 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.827368 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-scripts\") pod \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") "
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.827442 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b04a472b-82b0-4fe4-9d6c-49b9481034bb-etc-machine-id\") pod \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") "
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.827513 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-config-data\") pod \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") "
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.827685 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-combined-ca-bundle\") pod \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") "
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.827830 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b04a472b-82b0-4fe4-9d6c-49b9481034bb-logs\") pod \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") "
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.827910 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-config-data-custom\") pod \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") "
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.827949 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5vhf\" (UniqueName: \"kubernetes.io/projected/b04a472b-82b0-4fe4-9d6c-49b9481034bb-kube-api-access-t5vhf\") pod \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\" (UID: \"b04a472b-82b0-4fe4-9d6c-49b9481034bb\") "
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.828836 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b04a472b-82b0-4fe4-9d6c-49b9481034bb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b04a472b-82b0-4fe4-9d6c-49b9481034bb" (UID: "b04a472b-82b0-4fe4-9d6c-49b9481034bb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.831080 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b04a472b-82b0-4fe4-9d6c-49b9481034bb-logs" (OuterVolumeSpecName: "logs") pod "b04a472b-82b0-4fe4-9d6c-49b9481034bb" (UID: "b04a472b-82b0-4fe4-9d6c-49b9481034bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.837625 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b04a472b-82b0-4fe4-9d6c-49b9481034bb-kube-api-access-t5vhf" (OuterVolumeSpecName: "kube-api-access-t5vhf") pod "b04a472b-82b0-4fe4-9d6c-49b9481034bb" (UID: "b04a472b-82b0-4fe4-9d6c-49b9481034bb"). InnerVolumeSpecName "kube-api-access-t5vhf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.843655 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-scripts" (OuterVolumeSpecName: "scripts") pod "b04a472b-82b0-4fe4-9d6c-49b9481034bb" (UID: "b04a472b-82b0-4fe4-9d6c-49b9481034bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.856448 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b04a472b-82b0-4fe4-9d6c-49b9481034bb" (UID: "b04a472b-82b0-4fe4-9d6c-49b9481034bb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.885099 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b04a472b-82b0-4fe4-9d6c-49b9481034bb" (UID: "b04a472b-82b0-4fe4-9d6c-49b9481034bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.931971 4947 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.932390 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5vhf\" (UniqueName: \"kubernetes.io/projected/b04a472b-82b0-4fe4-9d6c-49b9481034bb-kube-api-access-t5vhf\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.932405 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-scripts\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.932414 4947 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b04a472b-82b0-4fe4-9d6c-49b9481034bb-etc-machine-id\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.932424 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.932433 4947 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b04a472b-82b0-4fe4-9d6c-49b9481034bb-logs\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.935417 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-config-data" (OuterVolumeSpecName: "config-data") pod "b04a472b-82b0-4fe4-9d6c-49b9481034bb" (UID: "b04a472b-82b0-4fe4-9d6c-49b9481034bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.935825 4947 generic.go:334] "Generic (PLEG): container finished" podID="b04a472b-82b0-4fe4-9d6c-49b9481034bb" containerID="ad64405e931ade59a738488f81bcfe6c8c134f3724ba169352c9421b4ee8b693" exitCode=0
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.935868 4947 generic.go:334] "Generic (PLEG): container finished" podID="b04a472b-82b0-4fe4-9d6c-49b9481034bb" containerID="1b54eae459c1f711f82130515fd5f40b733a67ffdd9e7f03ce3fdf9a23d3bfeb" exitCode=143
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.935985 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"b04a472b-82b0-4fe4-9d6c-49b9481034bb","Type":"ContainerDied","Data":"ad64405e931ade59a738488f81bcfe6c8c134f3724ba169352c9421b4ee8b693"}
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.936192 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"b04a472b-82b0-4fe4-9d6c-49b9481034bb","Type":"ContainerDied","Data":"1b54eae459c1f711f82130515fd5f40b733a67ffdd9e7f03ce3fdf9a23d3bfeb"}
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.936260 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"b04a472b-82b0-4fe4-9d6c-49b9481034bb","Type":"ContainerDied","Data":"285acd300ad1ff054c6de43516064ca58752e66cb1f7c23bfe84cc67eb2bc09c"}
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.936330 4947 scope.go:117] "RemoveContainer" containerID="ad64405e931ade59a738488f81bcfe6c8c134f3724ba169352c9421b4ee8b693"
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.936606 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.984383 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"]
Nov 29 07:40:02 crc kubenswrapper[4947]: I1129 07:40:02.995745 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"]
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.032477 4947 scope.go:117] "RemoveContainer" containerID="1b54eae459c1f711f82130515fd5f40b733a67ffdd9e7f03ce3fdf9a23d3bfeb"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.041332 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"]
Nov 29 07:40:03 crc kubenswrapper[4947]: E1129 07:40:03.045407 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b04a472b-82b0-4fe4-9d6c-49b9481034bb" containerName="manila-api-log"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.045463 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b04a472b-82b0-4fe4-9d6c-49b9481034bb" containerName="manila-api-log"
Nov 29 07:40:03 crc kubenswrapper[4947]: E1129 07:40:03.045543 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b04a472b-82b0-4fe4-9d6c-49b9481034bb" containerName="manila-api"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.045561 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b04a472b-82b0-4fe4-9d6c-49b9481034bb" containerName="manila-api"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.047825 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="b04a472b-82b0-4fe4-9d6c-49b9481034bb" containerName="manila-api"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.047915 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="b04a472b-82b0-4fe4-9d6c-49b9481034bb" containerName="manila-api-log"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.059738 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04a472b-82b0-4fe4-9d6c-49b9481034bb-config-data\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.064302 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.070812 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.072037 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.073065 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.084094 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.119290 4947 scope.go:117] "RemoveContainer" containerID="ad64405e931ade59a738488f81bcfe6c8c134f3724ba169352c9421b4ee8b693"
Nov 29 07:40:03 crc kubenswrapper[4947]: E1129 07:40:03.123887 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad64405e931ade59a738488f81bcfe6c8c134f3724ba169352c9421b4ee8b693\": container with ID starting with ad64405e931ade59a738488f81bcfe6c8c134f3724ba169352c9421b4ee8b693 not found: ID does not exist" containerID="ad64405e931ade59a738488f81bcfe6c8c134f3724ba169352c9421b4ee8b693"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.123973 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad64405e931ade59a738488f81bcfe6c8c134f3724ba169352c9421b4ee8b693"} err="failed to get container status \"ad64405e931ade59a738488f81bcfe6c8c134f3724ba169352c9421b4ee8b693\": rpc error: code = NotFound desc = could not find container \"ad64405e931ade59a738488f81bcfe6c8c134f3724ba169352c9421b4ee8b693\": container with ID starting with ad64405e931ade59a738488f81bcfe6c8c134f3724ba169352c9421b4ee8b693 not found: ID does not exist"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.124016 4947 scope.go:117] "RemoveContainer" containerID="1b54eae459c1f711f82130515fd5f40b733a67ffdd9e7f03ce3fdf9a23d3bfeb"
Nov 29 07:40:03 crc kubenswrapper[4947]: E1129 07:40:03.124909 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b54eae459c1f711f82130515fd5f40b733a67ffdd9e7f03ce3fdf9a23d3bfeb\": container with ID starting with 1b54eae459c1f711f82130515fd5f40b733a67ffdd9e7f03ce3fdf9a23d3bfeb not found: ID does not exist" containerID="1b54eae459c1f711f82130515fd5f40b733a67ffdd9e7f03ce3fdf9a23d3bfeb"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.124975 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b54eae459c1f711f82130515fd5f40b733a67ffdd9e7f03ce3fdf9a23d3bfeb"} err="failed to get container status \"1b54eae459c1f711f82130515fd5f40b733a67ffdd9e7f03ce3fdf9a23d3bfeb\": rpc error: code = NotFound desc = could not find container \"1b54eae459c1f711f82130515fd5f40b733a67ffdd9e7f03ce3fdf9a23d3bfeb\": container with ID starting with 1b54eae459c1f711f82130515fd5f40b733a67ffdd9e7f03ce3fdf9a23d3bfeb not found: ID does not exist"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.125010 4947 scope.go:117] "RemoveContainer" containerID="ad64405e931ade59a738488f81bcfe6c8c134f3724ba169352c9421b4ee8b693"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.125437 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad64405e931ade59a738488f81bcfe6c8c134f3724ba169352c9421b4ee8b693"} err="failed to get container status \"ad64405e931ade59a738488f81bcfe6c8c134f3724ba169352c9421b4ee8b693\": rpc error: code = NotFound desc = could not find container \"ad64405e931ade59a738488f81bcfe6c8c134f3724ba169352c9421b4ee8b693\": container with ID starting with ad64405e931ade59a738488f81bcfe6c8c134f3724ba169352c9421b4ee8b693 not found: ID does not exist"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.125471 4947 scope.go:117] "RemoveContainer" containerID="1b54eae459c1f711f82130515fd5f40b733a67ffdd9e7f03ce3fdf9a23d3bfeb"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.125989 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b54eae459c1f711f82130515fd5f40b733a67ffdd9e7f03ce3fdf9a23d3bfeb"} err="failed to get container status \"1b54eae459c1f711f82130515fd5f40b733a67ffdd9e7f03ce3fdf9a23d3bfeb\": rpc error: code = NotFound desc = could not find container \"1b54eae459c1f711f82130515fd5f40b733a67ffdd9e7f03ce3fdf9a23d3bfeb\": container with ID starting with 1b54eae459c1f711f82130515fd5f40b733a67ffdd9e7f03ce3fdf9a23d3bfeb not found: ID does not exist"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.201238 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b04a472b-82b0-4fe4-9d6c-49b9481034bb" path="/var/lib/kubelet/pods/b04a472b-82b0-4fe4-9d6c-49b9481034bb/volumes"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.263578 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fa713b2-c361-4226-b17d-933dee71ba86-logs\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.263657 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa713b2-c361-4226-b17d-933dee71ba86-config-data\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.263700 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa713b2-c361-4226-b17d-933dee71ba86-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.263738 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4fch\" (UniqueName: \"kubernetes.io/projected/1fa713b2-c361-4226-b17d-933dee71ba86-kube-api-access-t4fch\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.263795 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fa713b2-c361-4226-b17d-933dee71ba86-config-data-custom\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.263823 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa713b2-c361-4226-b17d-933dee71ba86-internal-tls-certs\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.263882 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa713b2-c361-4226-b17d-933dee71ba86-scripts\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.263898 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa713b2-c361-4226-b17d-933dee71ba86-public-tls-certs\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.263920 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fa713b2-c361-4226-b17d-933dee71ba86-etc-machine-id\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.299446 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-bf95bd4cd-8f5st" podUID="dec28edc-46ae-456a-9be2-ec56bdfd409f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.365680 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa713b2-c361-4226-b17d-933dee71ba86-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.365768 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4fch\" (UniqueName: \"kubernetes.io/projected/1fa713b2-c361-4226-b17d-933dee71ba86-kube-api-access-t4fch\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.365855 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fa713b2-c361-4226-b17d-933dee71ba86-config-data-custom\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.365886 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa713b2-c361-4226-b17d-933dee71ba86-internal-tls-certs\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.365970 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa713b2-c361-4226-b17d-933dee71ba86-scripts\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.365999 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa713b2-c361-4226-b17d-933dee71ba86-public-tls-certs\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.366032 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fa713b2-c361-4226-b17d-933dee71ba86-etc-machine-id\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.366081 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fa713b2-c361-4226-b17d-933dee71ba86-logs\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.366115 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa713b2-c361-4226-b17d-933dee71ba86-config-data\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.366200 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fa713b2-c361-4226-b17d-933dee71ba86-etc-machine-id\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.366746 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fa713b2-c361-4226-b17d-933dee71ba86-logs\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.374743 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa713b2-c361-4226-b17d-933dee71ba86-public-tls-certs\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.375134 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fa713b2-c361-4226-b17d-933dee71ba86-config-data-custom\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.375261 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa713b2-c361-4226-b17d-933dee71ba86-config-data\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.375899 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa713b2-c361-4226-b17d-933dee71ba86-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.377994 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa713b2-c361-4226-b17d-933dee71ba86-internal-tls-certs\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.379760 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa713b2-c361-4226-b17d-933dee71ba86-scripts\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.398351 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4fch\" (UniqueName: \"kubernetes.io/projected/1fa713b2-c361-4226-b17d-933dee71ba86-kube-api-access-t4fch\") pod \"manila-api-0\" (UID: \"1fa713b2-c361-4226-b17d-933dee71ba86\") " pod="openstack/manila-api-0"
Nov 29 07:40:03 crc kubenswrapper[4947]: I1129 07:40:03.424854 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Nov 29 07:40:04 crc kubenswrapper[4947]: I1129 07:40:04.131773 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Nov 29 07:40:04 crc kubenswrapper[4947]: I1129 07:40:04.971455 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"1fa713b2-c361-4226-b17d-933dee71ba86","Type":"ContainerStarted","Data":"0cc47b27b782d44b7bc406f6a941de89402bf41c4450a1c511028e82a997e214"}
Nov 29 07:40:04 crc kubenswrapper[4947]: I1129 07:40:04.972071 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"1fa713b2-c361-4226-b17d-933dee71ba86","Type":"ContainerStarted","Data":"6a2f4aa4b2bcaecf63688eb3519413cacc0f4be0c0de67f0678ed3a9e2c869ba"}
Nov 29 07:40:05 crc kubenswrapper[4947]: I1129 07:40:05.119928 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:40:05 crc kubenswrapper[4947]: I1129 07:40:05.120270 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61d5ade8-7aff-4794-b125-8976230ac2c7" containerName="ceilometer-central-agent" containerID="cri-o://16478996e26d9b8193ea7c9e93cb57fb8c498423707add8febb80121b9cc9b92" gracePeriod=30
Nov 29 07:40:05 crc kubenswrapper[4947]: I1129 07:40:05.120557 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61d5ade8-7aff-4794-b125-8976230ac2c7" containerName="ceilometer-notification-agent" containerID="cri-o://f371057812d11d5d65ef8ad96bacb494338a8f6411b5e99349323ac4db876d5a" gracePeriod=30
Nov 29 07:40:05 crc kubenswrapper[4947]: I1129 07:40:05.120577 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61d5ade8-7aff-4794-b125-8976230ac2c7" containerName="sg-core" containerID="cri-o://cf0ac48485450ed668311bf28161e20e6ff07face7d484829813d36539c40be3" gracePeriod=30
Nov 29 07:40:05 crc kubenswrapper[4947]: I1129 07:40:05.120615 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61d5ade8-7aff-4794-b125-8976230ac2c7" containerName="proxy-httpd" containerID="cri-o://68808fd967124b482b1cb4b10a5cb67d25d6315894f66100a60932207750eff7" gracePeriod=30
Nov 29 07:40:05 crc kubenswrapper[4947]: I1129 07:40:05.990169 4947 generic.go:334] "Generic (PLEG): container finished" podID="61d5ade8-7aff-4794-b125-8976230ac2c7" containerID="68808fd967124b482b1cb4b10a5cb67d25d6315894f66100a60932207750eff7" exitCode=0
Nov 29 07:40:05 crc kubenswrapper[4947]: I1129 07:40:05.990612 4947 generic.go:334] "Generic (PLEG): container finished" podID="61d5ade8-7aff-4794-b125-8976230ac2c7" containerID="cf0ac48485450ed668311bf28161e20e6ff07face7d484829813d36539c40be3" exitCode=2
Nov 29 07:40:05 crc kubenswrapper[4947]: I1129 07:40:05.990624 4947 generic.go:334] "Generic (PLEG): container finished" podID="61d5ade8-7aff-4794-b125-8976230ac2c7" containerID="16478996e26d9b8193ea7c9e93cb57fb8c498423707add8febb80121b9cc9b92" exitCode=0
Nov 29 07:40:05 crc kubenswrapper[4947]: I1129 07:40:05.990442 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61d5ade8-7aff-4794-b125-8976230ac2c7","Type":"ContainerDied","Data":"68808fd967124b482b1cb4b10a5cb67d25d6315894f66100a60932207750eff7"}
Nov 29 07:40:05 crc kubenswrapper[4947]: I1129 07:40:05.990678 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61d5ade8-7aff-4794-b125-8976230ac2c7","Type":"ContainerDied","Data":"cf0ac48485450ed668311bf28161e20e6ff07face7d484829813d36539c40be3"}
Nov 29 07:40:05 crc kubenswrapper[4947]: I1129 07:40:05.990692 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61d5ade8-7aff-4794-b125-8976230ac2c7","Type":"ContainerDied","Data":"16478996e26d9b8193ea7c9e93cb57fb8c498423707add8febb80121b9cc9b92"}
Nov 29 07:40:05 crc kubenswrapper[4947]: I1129 07:40:05.995603 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"1fa713b2-c361-4226-b17d-933dee71ba86","Type":"ContainerStarted","Data":"173c748bde780ed7c7b9ae6851567632ec5f1528807dd64ba1fbd0c5a8b6d4ca"}
Nov 29 07:40:05 crc kubenswrapper[4947]: I1129 07:40:05.995758 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0"
Nov 29 07:40:06 crc kubenswrapper[4947]: I1129 07:40:06.027893 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.027869754 podStartE2EDuration="4.027869754s" podCreationTimestamp="2025-11-29 07:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:40:06.020856476 +0000 UTC m=+3957.065238557" watchObservedRunningTime="2025-11-29 07:40:06.027869754 +0000 UTC m=+3957.072251835"
Nov 29 07:40:08 crc kubenswrapper[4947]: I1129 07:40:08.024471 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0"
Nov 29 07:40:08 crc kubenswrapper[4947]: I1129 07:40:08.113473 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76b5fdb995-mqmgd"
Nov 29 07:40:08 crc kubenswrapper[4947]: I1129 07:40:08.208631 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-7dxrl"]
Nov 29 07:40:08 crc kubenswrapper[4947]: I1129 07:40:08.209022 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" podUID="a69f6907-c4a4-45a1-a873-ae5c0557ee41" containerName="dnsmasq-dns"
containerID="cri-o://39a1ba9fcc013c234ffb2f014b8cef95647a8fc3a9eee1d1a26044ddf9e7e441" gracePeriod=10 Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.047960 4947 generic.go:334] "Generic (PLEG): container finished" podID="61d5ade8-7aff-4794-b125-8976230ac2c7" containerID="f371057812d11d5d65ef8ad96bacb494338a8f6411b5e99349323ac4db876d5a" exitCode=0 Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.048001 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61d5ade8-7aff-4794-b125-8976230ac2c7","Type":"ContainerDied","Data":"f371057812d11d5d65ef8ad96bacb494338a8f6411b5e99349323ac4db876d5a"} Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.050941 4947 generic.go:334] "Generic (PLEG): container finished" podID="a69f6907-c4a4-45a1-a873-ae5c0557ee41" containerID="39a1ba9fcc013c234ffb2f014b8cef95647a8fc3a9eee1d1a26044ddf9e7e441" exitCode=0 Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.050967 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" event={"ID":"a69f6907-c4a4-45a1-a873-ae5c0557ee41","Type":"ContainerDied","Data":"39a1ba9fcc013c234ffb2f014b8cef95647a8fc3a9eee1d1a26044ddf9e7e441"} Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.239960 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.246081 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.385499 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61d5ade8-7aff-4794-b125-8976230ac2c7-log-httpd\") pod \"61d5ade8-7aff-4794-b125-8976230ac2c7\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.385560 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-config\") pod \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.385603 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-sg-core-conf-yaml\") pod \"61d5ade8-7aff-4794-b125-8976230ac2c7\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.385641 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-dns-svc\") pod \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.385686 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-scripts\") pod \"61d5ade8-7aff-4794-b125-8976230ac2c7\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.385818 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-config-data\") pod \"61d5ade8-7aff-4794-b125-8976230ac2c7\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.385898 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-ovsdbserver-nb\") pod \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.385946 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61d5ade8-7aff-4794-b125-8976230ac2c7-run-httpd\") pod \"61d5ade8-7aff-4794-b125-8976230ac2c7\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.385971 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-combined-ca-bundle\") pod \"61d5ade8-7aff-4794-b125-8976230ac2c7\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.385996 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-openstack-edpm-ipam\") pod \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.386061 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b76k\" (UniqueName: \"kubernetes.io/projected/a69f6907-c4a4-45a1-a873-ae5c0557ee41-kube-api-access-4b76k\") pod \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " Nov 29 07:40:10 crc kubenswrapper[4947]: 
I1129 07:40:10.386126 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-ovsdbserver-sb\") pod \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\" (UID: \"a69f6907-c4a4-45a1-a873-ae5c0557ee41\") " Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.386335 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrcdd\" (UniqueName: \"kubernetes.io/projected/61d5ade8-7aff-4794-b125-8976230ac2c7-kube-api-access-qrcdd\") pod \"61d5ade8-7aff-4794-b125-8976230ac2c7\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.386373 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-ceilometer-tls-certs\") pod \"61d5ade8-7aff-4794-b125-8976230ac2c7\" (UID: \"61d5ade8-7aff-4794-b125-8976230ac2c7\") " Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.392181 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61d5ade8-7aff-4794-b125-8976230ac2c7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "61d5ade8-7aff-4794-b125-8976230ac2c7" (UID: "61d5ade8-7aff-4794-b125-8976230ac2c7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.395932 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61d5ade8-7aff-4794-b125-8976230ac2c7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "61d5ade8-7aff-4794-b125-8976230ac2c7" (UID: "61d5ade8-7aff-4794-b125-8976230ac2c7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.424022 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a69f6907-c4a4-45a1-a873-ae5c0557ee41-kube-api-access-4b76k" (OuterVolumeSpecName: "kube-api-access-4b76k") pod "a69f6907-c4a4-45a1-a873-ae5c0557ee41" (UID: "a69f6907-c4a4-45a1-a873-ae5c0557ee41"). InnerVolumeSpecName "kube-api-access-4b76k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.456701 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-scripts" (OuterVolumeSpecName: "scripts") pod "61d5ade8-7aff-4794-b125-8976230ac2c7" (UID: "61d5ade8-7aff-4794-b125-8976230ac2c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.457027 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61d5ade8-7aff-4794-b125-8976230ac2c7-kube-api-access-qrcdd" (OuterVolumeSpecName: "kube-api-access-qrcdd") pod "61d5ade8-7aff-4794-b125-8976230ac2c7" (UID: "61d5ade8-7aff-4794-b125-8976230ac2c7"). InnerVolumeSpecName "kube-api-access-qrcdd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.490670 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b76k\" (UniqueName: \"kubernetes.io/projected/a69f6907-c4a4-45a1-a873-ae5c0557ee41-kube-api-access-4b76k\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.490922 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrcdd\" (UniqueName: \"kubernetes.io/projected/61d5ade8-7aff-4794-b125-8976230ac2c7-kube-api-access-qrcdd\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.491007 4947 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61d5ade8-7aff-4794-b125-8976230ac2c7-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.491059 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.491109 4947 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61d5ade8-7aff-4794-b125-8976230ac2c7-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.498354 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "61d5ade8-7aff-4794-b125-8976230ac2c7" (UID: "61d5ade8-7aff-4794-b125-8976230ac2c7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.523637 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a69f6907-c4a4-45a1-a873-ae5c0557ee41" (UID: "a69f6907-c4a4-45a1-a873-ae5c0557ee41"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.577797 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-config" (OuterVolumeSpecName: "config") pod "a69f6907-c4a4-45a1-a873-ae5c0557ee41" (UID: "a69f6907-c4a4-45a1-a873-ae5c0557ee41"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.579007 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a69f6907-c4a4-45a1-a873-ae5c0557ee41" (UID: "a69f6907-c4a4-45a1-a873-ae5c0557ee41"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.585353 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "a69f6907-c4a4-45a1-a873-ae5c0557ee41" (UID: "a69f6907-c4a4-45a1-a873-ae5c0557ee41"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.593266 4947 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.593346 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.593361 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.593374 4947 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.593385 4947 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.607369 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "61d5ade8-7aff-4794-b125-8976230ac2c7" (UID: "61d5ade8-7aff-4794-b125-8976230ac2c7"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.610734 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a69f6907-c4a4-45a1-a873-ae5c0557ee41" (UID: "a69f6907-c4a4-45a1-a873-ae5c0557ee41"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.627432 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61d5ade8-7aff-4794-b125-8976230ac2c7" (UID: "61d5ade8-7aff-4794-b125-8976230ac2c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.697575 4947 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.697613 4947 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a69f6907-c4a4-45a1-a873-ae5c0557ee41-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.697622 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.709854 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-config-data" (OuterVolumeSpecName: 
"config-data") pod "61d5ade8-7aff-4794-b125-8976230ac2c7" (UID: "61d5ade8-7aff-4794-b125-8976230ac2c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:10 crc kubenswrapper[4947]: I1129 07:40:10.799982 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61d5ade8-7aff-4794-b125-8976230ac2c7-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.079276 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" event={"ID":"a69f6907-c4a4-45a1-a873-ae5c0557ee41","Type":"ContainerDied","Data":"2deac627820121d03fdcfbb05128bf3755e1caf821ec7c11ee5a404f94479ed5"} Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.079735 4947 scope.go:117] "RemoveContainer" containerID="39a1ba9fcc013c234ffb2f014b8cef95647a8fc3a9eee1d1a26044ddf9e7e441" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.079547 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.085199 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"fca98943-b0c1-461f-91fb-56f3f476810c","Type":"ContainerStarted","Data":"e8d270e99e904d3cb1a8c1af4149573334d284c0e5558a850313cc0e3617d53a"} Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.092730 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.092674 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61d5ade8-7aff-4794-b125-8976230ac2c7","Type":"ContainerDied","Data":"1781f03763779110c7515fc0152e9e6d6ce0ff1b34effcd6bfd2b0751c79d0d8"} Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.130014 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-7dxrl"] Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.135288 4947 scope.go:117] "RemoveContainer" containerID="885fd1bba3922cf325fcc284f243bf28a09153a64d2fbee9ed7e393bef95b4fa" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.149006 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-7dxrl"] Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.165429 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.177850 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.183534 4947 scope.go:117] "RemoveContainer" containerID="68808fd967124b482b1cb4b10a5cb67d25d6315894f66100a60932207750eff7" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.193596 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61d5ade8-7aff-4794-b125-8976230ac2c7" path="/var/lib/kubelet/pods/61d5ade8-7aff-4794-b125-8976230ac2c7/volumes" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.194540 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a69f6907-c4a4-45a1-a873-ae5c0557ee41" path="/var/lib/kubelet/pods/a69f6907-c4a4-45a1-a873-ae5c0557ee41/volumes" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.196096 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:40:11 crc 
kubenswrapper[4947]: E1129 07:40:11.196442 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61d5ade8-7aff-4794-b125-8976230ac2c7" containerName="ceilometer-notification-agent" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.196463 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="61d5ade8-7aff-4794-b125-8976230ac2c7" containerName="ceilometer-notification-agent" Nov 29 07:40:11 crc kubenswrapper[4947]: E1129 07:40:11.196487 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61d5ade8-7aff-4794-b125-8976230ac2c7" containerName="ceilometer-central-agent" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.196494 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="61d5ade8-7aff-4794-b125-8976230ac2c7" containerName="ceilometer-central-agent" Nov 29 07:40:11 crc kubenswrapper[4947]: E1129 07:40:11.196506 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61d5ade8-7aff-4794-b125-8976230ac2c7" containerName="sg-core" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.196513 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="61d5ade8-7aff-4794-b125-8976230ac2c7" containerName="sg-core" Nov 29 07:40:11 crc kubenswrapper[4947]: E1129 07:40:11.196526 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a69f6907-c4a4-45a1-a873-ae5c0557ee41" containerName="init" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.196534 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="a69f6907-c4a4-45a1-a873-ae5c0557ee41" containerName="init" Nov 29 07:40:11 crc kubenswrapper[4947]: E1129 07:40:11.196549 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a69f6907-c4a4-45a1-a873-ae5c0557ee41" containerName="dnsmasq-dns" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.196555 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="a69f6907-c4a4-45a1-a873-ae5c0557ee41" containerName="dnsmasq-dns" Nov 29 07:40:11 crc 
kubenswrapper[4947]: E1129 07:40:11.196575 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61d5ade8-7aff-4794-b125-8976230ac2c7" containerName="proxy-httpd" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.196581 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="61d5ade8-7aff-4794-b125-8976230ac2c7" containerName="proxy-httpd" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.196753 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="a69f6907-c4a4-45a1-a873-ae5c0557ee41" containerName="dnsmasq-dns" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.196770 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="61d5ade8-7aff-4794-b125-8976230ac2c7" containerName="ceilometer-central-agent" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.196782 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="61d5ade8-7aff-4794-b125-8976230ac2c7" containerName="sg-core" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.196791 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="61d5ade8-7aff-4794-b125-8976230ac2c7" containerName="proxy-httpd" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.196809 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="61d5ade8-7aff-4794-b125-8976230ac2c7" containerName="ceilometer-notification-agent" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.205764 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.211648 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.211949 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.211949 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.211353 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:40:11 crc kubenswrapper[4947]: E1129 07:40:11.218449 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.221846 4947 scope.go:117] "RemoveContainer" containerID="cf0ac48485450ed668311bf28161e20e6ff07face7d484829813d36539c40be3" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.222764 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.281469 4947 scope.go:117] "RemoveContainer" containerID="f371057812d11d5d65ef8ad96bacb494338a8f6411b5e99349323ac4db876d5a" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.311577 4947 scope.go:117] "RemoveContainer" containerID="16478996e26d9b8193ea7c9e93cb57fb8c498423707add8febb80121b9cc9b92" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.318425 4947 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-scripts\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.318535 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/886ea3a6-2ccc-4e4e-8a77-25443b150f70-run-httpd\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.318570 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/886ea3a6-2ccc-4e4e-8a77-25443b150f70-log-httpd\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.318716 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.318748 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-config-data\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.318765 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.319237 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.319357 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s28b\" (UniqueName: \"kubernetes.io/projected/886ea3a6-2ccc-4e4e-8a77-25443b150f70-kube-api-access-9s28b\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.421525 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.421606 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-config-data\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.421633 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " 
pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.421664 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.421704 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s28b\" (UniqueName: \"kubernetes.io/projected/886ea3a6-2ccc-4e4e-8a77-25443b150f70-kube-api-access-9s28b\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.421771 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-scripts\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.421837 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/886ea3a6-2ccc-4e4e-8a77-25443b150f70-run-httpd\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.421875 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/886ea3a6-2ccc-4e4e-8a77-25443b150f70-log-httpd\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.422693 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/886ea3a6-2ccc-4e4e-8a77-25443b150f70-log-httpd\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.422834 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/886ea3a6-2ccc-4e4e-8a77-25443b150f70-run-httpd\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.428503 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-scripts\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.432282 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-config-data\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.439752 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.439863 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.440475 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.443438 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s28b\" (UniqueName: \"kubernetes.io/projected/886ea3a6-2ccc-4e4e-8a77-25443b150f70-kube-api-access-9s28b\") pod \"ceilometer-0\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.549645 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.589534 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:40:11 crc kubenswrapper[4947]: I1129 07:40:11.959176 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:40:12 crc kubenswrapper[4947]: I1129 07:40:12.114111 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"886ea3a6-2ccc-4e4e-8a77-25443b150f70","Type":"ContainerStarted","Data":"0e87ac222542e73020f06baf8e3822c4be3e6deb1aea7213f1dd1f102fe24884"} Nov 29 07:40:12 crc kubenswrapper[4947]: I1129 07:40:12.134445 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"fca98943-b0c1-461f-91fb-56f3f476810c","Type":"ContainerStarted","Data":"fcc5c92c546d9e693b73593447f0137669ce5ca6e83e1d8e52428a96ca30302b"} Nov 29 07:40:12 crc kubenswrapper[4947]: I1129 07:40:12.194903 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.711264416 podStartE2EDuration="15.194870825s" podCreationTimestamp="2025-11-29 07:39:57 +0000 UTC" 
firstStartedPulling="2025-11-29 07:39:58.929368356 +0000 UTC m=+3949.973750437" lastFinishedPulling="2025-11-29 07:40:09.412974755 +0000 UTC m=+3960.457356846" observedRunningTime="2025-11-29 07:40:12.183521438 +0000 UTC m=+3963.227903529" watchObservedRunningTime="2025-11-29 07:40:12.194870825 +0000 UTC m=+3963.239252906" Nov 29 07:40:13 crc kubenswrapper[4947]: I1129 07:40:13.157663 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"886ea3a6-2ccc-4e4e-8a77-25443b150f70","Type":"ContainerStarted","Data":"2f99e3f713cec876c0e2e064324931b42700bf08e8d3c0a33bbd7eddc644528c"} Nov 29 07:40:13 crc kubenswrapper[4947]: I1129 07:40:13.300879 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-bf95bd4cd-8f5st" podUID="dec28edc-46ae-456a-9be2-ec56bdfd409f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused" Nov 29 07:40:13 crc kubenswrapper[4947]: I1129 07:40:13.301026 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:40:14 crc kubenswrapper[4947]: I1129 07:40:14.169586 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"886ea3a6-2ccc-4e4e-8a77-25443b150f70","Type":"ContainerStarted","Data":"68c9491ac39e84afb81e58387d080d44f49e1cce36c079d3c9fea1cd432061d7"} Nov 29 07:40:14 crc kubenswrapper[4947]: I1129 07:40:14.991100 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-864d5fc68c-7dxrl" podUID="a69f6907-c4a4-45a1-a873-ae5c0557ee41" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.193:5353: i/o timeout" Nov 29 07:40:15 crc kubenswrapper[4947]: I1129 07:40:15.202685 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"886ea3a6-2ccc-4e4e-8a77-25443b150f70","Type":"ContainerStarted","Data":"7fa0add00a33907b1949889b39f2121f05c701f01ebfe543e680b84275d787e3"} Nov 29 07:40:17 crc kubenswrapper[4947]: I1129 07:40:17.209027 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"886ea3a6-2ccc-4e4e-8a77-25443b150f70","Type":"ContainerStarted","Data":"da1e3f74b78a7a054a8d9daf4026308ddde2e2ff45148fa290809cc5316fb8a3"} Nov 29 07:40:17 crc kubenswrapper[4947]: I1129 07:40:17.209944 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:40:17 crc kubenswrapper[4947]: I1129 07:40:17.210115 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="886ea3a6-2ccc-4e4e-8a77-25443b150f70" containerName="proxy-httpd" containerID="cri-o://da1e3f74b78a7a054a8d9daf4026308ddde2e2ff45148fa290809cc5316fb8a3" gracePeriod=30 Nov 29 07:40:17 crc kubenswrapper[4947]: I1129 07:40:17.210336 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="886ea3a6-2ccc-4e4e-8a77-25443b150f70" containerName="sg-core" containerID="cri-o://7fa0add00a33907b1949889b39f2121f05c701f01ebfe543e680b84275d787e3" gracePeriod=30 Nov 29 07:40:17 crc kubenswrapper[4947]: I1129 07:40:17.210386 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="886ea3a6-2ccc-4e4e-8a77-25443b150f70" containerName="ceilometer-notification-agent" containerID="cri-o://68c9491ac39e84afb81e58387d080d44f49e1cce36c079d3c9fea1cd432061d7" gracePeriod=30 Nov 29 07:40:17 crc kubenswrapper[4947]: I1129 07:40:17.209210 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="886ea3a6-2ccc-4e4e-8a77-25443b150f70" containerName="ceilometer-central-agent" containerID="cri-o://2f99e3f713cec876c0e2e064324931b42700bf08e8d3c0a33bbd7eddc644528c" 
gracePeriod=30 Nov 29 07:40:17 crc kubenswrapper[4947]: I1129 07:40:17.258185 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.074036953 podStartE2EDuration="6.258156824s" podCreationTimestamp="2025-11-29 07:40:11 +0000 UTC" firstStartedPulling="2025-11-29 07:40:11.964995265 +0000 UTC m=+3963.009377346" lastFinishedPulling="2025-11-29 07:40:16.149115136 +0000 UTC m=+3967.193497217" observedRunningTime="2025-11-29 07:40:17.24576197 +0000 UTC m=+3968.290144061" watchObservedRunningTime="2025-11-29 07:40:17.258156824 +0000 UTC m=+3968.302538925" Nov 29 07:40:18 crc kubenswrapper[4947]: I1129 07:40:18.075063 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 29 07:40:18 crc kubenswrapper[4947]: I1129 07:40:18.224320 4947 generic.go:334] "Generic (PLEG): container finished" podID="886ea3a6-2ccc-4e4e-8a77-25443b150f70" containerID="da1e3f74b78a7a054a8d9daf4026308ddde2e2ff45148fa290809cc5316fb8a3" exitCode=0 Nov 29 07:40:18 crc kubenswrapper[4947]: I1129 07:40:18.224361 4947 generic.go:334] "Generic (PLEG): container finished" podID="886ea3a6-2ccc-4e4e-8a77-25443b150f70" containerID="7fa0add00a33907b1949889b39f2121f05c701f01ebfe543e680b84275d787e3" exitCode=2 Nov 29 07:40:18 crc kubenswrapper[4947]: I1129 07:40:18.224371 4947 generic.go:334] "Generic (PLEG): container finished" podID="886ea3a6-2ccc-4e4e-8a77-25443b150f70" containerID="68c9491ac39e84afb81e58387d080d44f49e1cce36c079d3c9fea1cd432061d7" exitCode=0 Nov 29 07:40:18 crc kubenswrapper[4947]: I1129 07:40:18.224394 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"886ea3a6-2ccc-4e4e-8a77-25443b150f70","Type":"ContainerDied","Data":"da1e3f74b78a7a054a8d9daf4026308ddde2e2ff45148fa290809cc5316fb8a3"} Nov 29 07:40:18 crc kubenswrapper[4947]: I1129 07:40:18.224427 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"886ea3a6-2ccc-4e4e-8a77-25443b150f70","Type":"ContainerDied","Data":"7fa0add00a33907b1949889b39f2121f05c701f01ebfe543e680b84275d787e3"} Nov 29 07:40:18 crc kubenswrapper[4947]: I1129 07:40:18.224437 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"886ea3a6-2ccc-4e4e-8a77-25443b150f70","Type":"ContainerDied","Data":"68c9491ac39e84afb81e58387d080d44f49e1cce36c079d3c9fea1cd432061d7"} Nov 29 07:40:18 crc kubenswrapper[4947]: I1129 07:40:18.858856 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:40:18 crc kubenswrapper[4947]: I1129 07:40:18.943090 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-combined-ca-bundle\") pod \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " Nov 29 07:40:18 crc kubenswrapper[4947]: I1129 07:40:18.943249 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-scripts\") pod \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " Nov 29 07:40:18 crc kubenswrapper[4947]: I1129 07:40:18.943287 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/886ea3a6-2ccc-4e4e-8a77-25443b150f70-run-httpd\") pod \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " Nov 29 07:40:18 crc kubenswrapper[4947]: I1129 07:40:18.943320 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-sg-core-conf-yaml\") pod \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\" 
(UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " Nov 29 07:40:18 crc kubenswrapper[4947]: I1129 07:40:18.943336 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-config-data\") pod \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " Nov 29 07:40:18 crc kubenswrapper[4947]: I1129 07:40:18.943390 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-ceilometer-tls-certs\") pod \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " Nov 29 07:40:18 crc kubenswrapper[4947]: I1129 07:40:18.943414 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s28b\" (UniqueName: \"kubernetes.io/projected/886ea3a6-2ccc-4e4e-8a77-25443b150f70-kube-api-access-9s28b\") pod \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " Nov 29 07:40:18 crc kubenswrapper[4947]: I1129 07:40:18.943491 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/886ea3a6-2ccc-4e4e-8a77-25443b150f70-log-httpd\") pod \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\" (UID: \"886ea3a6-2ccc-4e4e-8a77-25443b150f70\") " Nov 29 07:40:18 crc kubenswrapper[4947]: I1129 07:40:18.944540 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/886ea3a6-2ccc-4e4e-8a77-25443b150f70-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "886ea3a6-2ccc-4e4e-8a77-25443b150f70" (UID: "886ea3a6-2ccc-4e4e-8a77-25443b150f70"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:40:18 crc kubenswrapper[4947]: I1129 07:40:18.946588 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/886ea3a6-2ccc-4e4e-8a77-25443b150f70-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "886ea3a6-2ccc-4e4e-8a77-25443b150f70" (UID: "886ea3a6-2ccc-4e4e-8a77-25443b150f70"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:40:18 crc kubenswrapper[4947]: I1129 07:40:18.950475 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/886ea3a6-2ccc-4e4e-8a77-25443b150f70-kube-api-access-9s28b" (OuterVolumeSpecName: "kube-api-access-9s28b") pod "886ea3a6-2ccc-4e4e-8a77-25443b150f70" (UID: "886ea3a6-2ccc-4e4e-8a77-25443b150f70"). InnerVolumeSpecName "kube-api-access-9s28b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:18 crc kubenswrapper[4947]: I1129 07:40:18.951613 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-scripts" (OuterVolumeSpecName: "scripts") pod "886ea3a6-2ccc-4e4e-8a77-25443b150f70" (UID: "886ea3a6-2ccc-4e4e-8a77-25443b150f70"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:18 crc kubenswrapper[4947]: I1129 07:40:18.987251 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "886ea3a6-2ccc-4e4e-8a77-25443b150f70" (UID: "886ea3a6-2ccc-4e4e-8a77-25443b150f70"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:18 crc kubenswrapper[4947]: I1129 07:40:18.999457 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "886ea3a6-2ccc-4e4e-8a77-25443b150f70" (UID: "886ea3a6-2ccc-4e4e-8a77-25443b150f70"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.018720 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bf95bd4cd-8f5st" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.045528 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dec28edc-46ae-456a-9be2-ec56bdfd409f-horizon-secret-key\") pod \"dec28edc-46ae-456a-9be2-ec56bdfd409f\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.045603 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec28edc-46ae-456a-9be2-ec56bdfd409f-combined-ca-bundle\") pod \"dec28edc-46ae-456a-9be2-ec56bdfd409f\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.045647 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dec28edc-46ae-456a-9be2-ec56bdfd409f-config-data\") pod \"dec28edc-46ae-456a-9be2-ec56bdfd409f\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.045697 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zrzh\" (UniqueName: 
\"kubernetes.io/projected/dec28edc-46ae-456a-9be2-ec56bdfd409f-kube-api-access-8zrzh\") pod \"dec28edc-46ae-456a-9be2-ec56bdfd409f\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.045732 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dec28edc-46ae-456a-9be2-ec56bdfd409f-horizon-tls-certs\") pod \"dec28edc-46ae-456a-9be2-ec56bdfd409f\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.045859 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dec28edc-46ae-456a-9be2-ec56bdfd409f-scripts\") pod \"dec28edc-46ae-456a-9be2-ec56bdfd409f\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.045928 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dec28edc-46ae-456a-9be2-ec56bdfd409f-logs\") pod \"dec28edc-46ae-456a-9be2-ec56bdfd409f\" (UID: \"dec28edc-46ae-456a-9be2-ec56bdfd409f\") " Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.046414 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.046426 4947 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/886ea3a6-2ccc-4e4e-8a77-25443b150f70-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.046435 4947 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 
29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.046446 4947 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.046455 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s28b\" (UniqueName: \"kubernetes.io/projected/886ea3a6-2ccc-4e4e-8a77-25443b150f70-kube-api-access-9s28b\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.046463 4947 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/886ea3a6-2ccc-4e4e-8a77-25443b150f70-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.046777 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dec28edc-46ae-456a-9be2-ec56bdfd409f-logs" (OuterVolumeSpecName: "logs") pod "dec28edc-46ae-456a-9be2-ec56bdfd409f" (UID: "dec28edc-46ae-456a-9be2-ec56bdfd409f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.051374 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dec28edc-46ae-456a-9be2-ec56bdfd409f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "dec28edc-46ae-456a-9be2-ec56bdfd409f" (UID: "dec28edc-46ae-456a-9be2-ec56bdfd409f"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.055145 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dec28edc-46ae-456a-9be2-ec56bdfd409f-kube-api-access-8zrzh" (OuterVolumeSpecName: "kube-api-access-8zrzh") pod "dec28edc-46ae-456a-9be2-ec56bdfd409f" (UID: "dec28edc-46ae-456a-9be2-ec56bdfd409f"). InnerVolumeSpecName "kube-api-access-8zrzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.055388 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "886ea3a6-2ccc-4e4e-8a77-25443b150f70" (UID: "886ea3a6-2ccc-4e4e-8a77-25443b150f70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.066516 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-config-data" (OuterVolumeSpecName: "config-data") pod "886ea3a6-2ccc-4e4e-8a77-25443b150f70" (UID: "886ea3a6-2ccc-4e4e-8a77-25443b150f70"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.089389 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dec28edc-46ae-456a-9be2-ec56bdfd409f-scripts" (OuterVolumeSpecName: "scripts") pod "dec28edc-46ae-456a-9be2-ec56bdfd409f" (UID: "dec28edc-46ae-456a-9be2-ec56bdfd409f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.095909 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dec28edc-46ae-456a-9be2-ec56bdfd409f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dec28edc-46ae-456a-9be2-ec56bdfd409f" (UID: "dec28edc-46ae-456a-9be2-ec56bdfd409f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.098818 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dec28edc-46ae-456a-9be2-ec56bdfd409f-config-data" (OuterVolumeSpecName: "config-data") pod "dec28edc-46ae-456a-9be2-ec56bdfd409f" (UID: "dec28edc-46ae-456a-9be2-ec56bdfd409f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.125760 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dec28edc-46ae-456a-9be2-ec56bdfd409f-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "dec28edc-46ae-456a-9be2-ec56bdfd409f" (UID: "dec28edc-46ae-456a-9be2-ec56bdfd409f"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.150408 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dec28edc-46ae-456a-9be2-ec56bdfd409f-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.150678 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.150769 4947 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dec28edc-46ae-456a-9be2-ec56bdfd409f-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.150854 4947 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dec28edc-46ae-456a-9be2-ec56bdfd409f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.150942 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec28edc-46ae-456a-9be2-ec56bdfd409f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.151025 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dec28edc-46ae-456a-9be2-ec56bdfd409f-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.151099 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zrzh\" (UniqueName: \"kubernetes.io/projected/dec28edc-46ae-456a-9be2-ec56bdfd409f-kube-api-access-8zrzh\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.151179 4947 reconciler_common.go:293] 
"Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dec28edc-46ae-456a-9be2-ec56bdfd409f-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.151285 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/886ea3a6-2ccc-4e4e-8a77-25443b150f70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.234174 4947 generic.go:334] "Generic (PLEG): container finished" podID="886ea3a6-2ccc-4e4e-8a77-25443b150f70" containerID="2f99e3f713cec876c0e2e064324931b42700bf08e8d3c0a33bbd7eddc644528c" exitCode=0 Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.234262 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"886ea3a6-2ccc-4e4e-8a77-25443b150f70","Type":"ContainerDied","Data":"2f99e3f713cec876c0e2e064324931b42700bf08e8d3c0a33bbd7eddc644528c"} Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.234297 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"886ea3a6-2ccc-4e4e-8a77-25443b150f70","Type":"ContainerDied","Data":"0e87ac222542e73020f06baf8e3822c4be3e6deb1aea7213f1dd1f102fe24884"} Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.234317 4947 scope.go:117] "RemoveContainer" containerID="da1e3f74b78a7a054a8d9daf4026308ddde2e2ff45148fa290809cc5316fb8a3" Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.234446 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.238558 4947 generic.go:334] "Generic (PLEG): container finished" podID="dec28edc-46ae-456a-9be2-ec56bdfd409f" containerID="aef36c05b7b3247775c426899af1ec33afa2b063acb7b808f071eca790a6a14c" exitCode=137
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.238630 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bf95bd4cd-8f5st"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.238746 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf95bd4cd-8f5st" event={"ID":"dec28edc-46ae-456a-9be2-ec56bdfd409f","Type":"ContainerDied","Data":"aef36c05b7b3247775c426899af1ec33afa2b063acb7b808f071eca790a6a14c"}
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.238837 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf95bd4cd-8f5st" event={"ID":"dec28edc-46ae-456a-9be2-ec56bdfd409f","Type":"ContainerDied","Data":"e49fb151ce2885b6b1617adeae58403f34d78f3a0308f3441b476dc192ab8f39"}
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.270330 4947 scope.go:117] "RemoveContainer" containerID="7fa0add00a33907b1949889b39f2121f05c701f01ebfe543e680b84275d787e3"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.272389 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-bf95bd4cd-8f5st"]
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.291407 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-bf95bd4cd-8f5st"]
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.307307 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.318540 4947 scope.go:117] "RemoveContainer" containerID="68c9491ac39e84afb81e58387d080d44f49e1cce36c079d3c9fea1cd432061d7"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.327707 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.338831 4947 scope.go:117] "RemoveContainer" containerID="2f99e3f713cec876c0e2e064324931b42700bf08e8d3c0a33bbd7eddc644528c"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.341069 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:40:19 crc kubenswrapper[4947]: E1129 07:40:19.341550 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886ea3a6-2ccc-4e4e-8a77-25443b150f70" containerName="sg-core"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.341575 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="886ea3a6-2ccc-4e4e-8a77-25443b150f70" containerName="sg-core"
Nov 29 07:40:19 crc kubenswrapper[4947]: E1129 07:40:19.341614 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886ea3a6-2ccc-4e4e-8a77-25443b150f70" containerName="ceilometer-notification-agent"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.341625 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="886ea3a6-2ccc-4e4e-8a77-25443b150f70" containerName="ceilometer-notification-agent"
Nov 29 07:40:19 crc kubenswrapper[4947]: E1129 07:40:19.341643 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886ea3a6-2ccc-4e4e-8a77-25443b150f70" containerName="ceilometer-central-agent"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.341652 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="886ea3a6-2ccc-4e4e-8a77-25443b150f70" containerName="ceilometer-central-agent"
Nov 29 07:40:19 crc kubenswrapper[4947]: E1129 07:40:19.341669 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886ea3a6-2ccc-4e4e-8a77-25443b150f70" containerName="proxy-httpd"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.341677 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="886ea3a6-2ccc-4e4e-8a77-25443b150f70" containerName="proxy-httpd"
Nov 29 07:40:19 crc kubenswrapper[4947]: E1129 07:40:19.341690 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec28edc-46ae-456a-9be2-ec56bdfd409f" containerName="horizon-log"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.341699 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="dec28edc-46ae-456a-9be2-ec56bdfd409f" containerName="horizon-log"
Nov 29 07:40:19 crc kubenswrapper[4947]: E1129 07:40:19.341718 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec28edc-46ae-456a-9be2-ec56bdfd409f" containerName="horizon"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.341728 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="dec28edc-46ae-456a-9be2-ec56bdfd409f" containerName="horizon"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.341988 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="886ea3a6-2ccc-4e4e-8a77-25443b150f70" containerName="ceilometer-notification-agent"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.342011 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="dec28edc-46ae-456a-9be2-ec56bdfd409f" containerName="horizon-log"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.342020 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="886ea3a6-2ccc-4e4e-8a77-25443b150f70" containerName="ceilometer-central-agent"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.342043 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="886ea3a6-2ccc-4e4e-8a77-25443b150f70" containerName="proxy-httpd"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.342055 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="dec28edc-46ae-456a-9be2-ec56bdfd409f" containerName="horizon"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.342071 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="886ea3a6-2ccc-4e4e-8a77-25443b150f70" containerName="sg-core"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.344315 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.346246 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.346352 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.346689 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.358614 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.390475 4947 scope.go:117] "RemoveContainer" containerID="da1e3f74b78a7a054a8d9daf4026308ddde2e2ff45148fa290809cc5316fb8a3"
Nov 29 07:40:19 crc kubenswrapper[4947]: E1129 07:40:19.391047 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da1e3f74b78a7a054a8d9daf4026308ddde2e2ff45148fa290809cc5316fb8a3\": container with ID starting with da1e3f74b78a7a054a8d9daf4026308ddde2e2ff45148fa290809cc5316fb8a3 not found: ID does not exist" containerID="da1e3f74b78a7a054a8d9daf4026308ddde2e2ff45148fa290809cc5316fb8a3"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.391101 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da1e3f74b78a7a054a8d9daf4026308ddde2e2ff45148fa290809cc5316fb8a3"} err="failed to get container status \"da1e3f74b78a7a054a8d9daf4026308ddde2e2ff45148fa290809cc5316fb8a3\": rpc error: code = NotFound desc = could not find container \"da1e3f74b78a7a054a8d9daf4026308ddde2e2ff45148fa290809cc5316fb8a3\": container with ID starting with da1e3f74b78a7a054a8d9daf4026308ddde2e2ff45148fa290809cc5316fb8a3 not found: ID does not exist"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.391130 4947 scope.go:117] "RemoveContainer" containerID="7fa0add00a33907b1949889b39f2121f05c701f01ebfe543e680b84275d787e3"
Nov 29 07:40:19 crc kubenswrapper[4947]: E1129 07:40:19.391496 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fa0add00a33907b1949889b39f2121f05c701f01ebfe543e680b84275d787e3\": container with ID starting with 7fa0add00a33907b1949889b39f2121f05c701f01ebfe543e680b84275d787e3 not found: ID does not exist" containerID="7fa0add00a33907b1949889b39f2121f05c701f01ebfe543e680b84275d787e3"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.391540 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fa0add00a33907b1949889b39f2121f05c701f01ebfe543e680b84275d787e3"} err="failed to get container status \"7fa0add00a33907b1949889b39f2121f05c701f01ebfe543e680b84275d787e3\": rpc error: code = NotFound desc = could not find container \"7fa0add00a33907b1949889b39f2121f05c701f01ebfe543e680b84275d787e3\": container with ID starting with 7fa0add00a33907b1949889b39f2121f05c701f01ebfe543e680b84275d787e3 not found: ID does not exist"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.391566 4947 scope.go:117] "RemoveContainer" containerID="68c9491ac39e84afb81e58387d080d44f49e1cce36c079d3c9fea1cd432061d7"
Nov 29 07:40:19 crc kubenswrapper[4947]: E1129 07:40:19.391818 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68c9491ac39e84afb81e58387d080d44f49e1cce36c079d3c9fea1cd432061d7\": container with ID starting with 68c9491ac39e84afb81e58387d080d44f49e1cce36c079d3c9fea1cd432061d7 not found: ID does not exist" containerID="68c9491ac39e84afb81e58387d080d44f49e1cce36c079d3c9fea1cd432061d7"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.391845 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68c9491ac39e84afb81e58387d080d44f49e1cce36c079d3c9fea1cd432061d7"} err="failed to get container status \"68c9491ac39e84afb81e58387d080d44f49e1cce36c079d3c9fea1cd432061d7\": rpc error: code = NotFound desc = could not find container \"68c9491ac39e84afb81e58387d080d44f49e1cce36c079d3c9fea1cd432061d7\": container with ID starting with 68c9491ac39e84afb81e58387d080d44f49e1cce36c079d3c9fea1cd432061d7 not found: ID does not exist"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.391861 4947 scope.go:117] "RemoveContainer" containerID="2f99e3f713cec876c0e2e064324931b42700bf08e8d3c0a33bbd7eddc644528c"
Nov 29 07:40:19 crc kubenswrapper[4947]: E1129 07:40:19.392095 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f99e3f713cec876c0e2e064324931b42700bf08e8d3c0a33bbd7eddc644528c\": container with ID starting with 2f99e3f713cec876c0e2e064324931b42700bf08e8d3c0a33bbd7eddc644528c not found: ID does not exist" containerID="2f99e3f713cec876c0e2e064324931b42700bf08e8d3c0a33bbd7eddc644528c"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.392127 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f99e3f713cec876c0e2e064324931b42700bf08e8d3c0a33bbd7eddc644528c"} err="failed to get container status \"2f99e3f713cec876c0e2e064324931b42700bf08e8d3c0a33bbd7eddc644528c\": rpc error: code = NotFound desc = could not find container \"2f99e3f713cec876c0e2e064324931b42700bf08e8d3c0a33bbd7eddc644528c\": container with ID starting with 2f99e3f713cec876c0e2e064324931b42700bf08e8d3c0a33bbd7eddc644528c not found: ID does not exist"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.392147 4947 scope.go:117] "RemoveContainer" containerID="9ae6aeca4958aaa8abddb6e01d2d0b4cd34a211b67647584071a9757a1733ad3"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.461911 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9rbv\" (UniqueName: \"kubernetes.io/projected/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-kube-api-access-n9rbv\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.462082 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.462239 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.462380 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-scripts\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.462612 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-config-data\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.462650 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-log-httpd\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.462672 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.462797 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-run-httpd\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.564310 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-run-httpd\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.564444 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9rbv\" (UniqueName: \"kubernetes.io/projected/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-kube-api-access-n9rbv\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.564495 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.564580 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.564993 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-run-httpd\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.565565 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-scripts\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.565670 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-config-data\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.565689 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-log-httpd\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.565706 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.569544 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-log-httpd\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.569841 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.570745 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.570918 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-config-data\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.584039 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.585246 4947 scope.go:117] "RemoveContainer" containerID="aef36c05b7b3247775c426899af1ec33afa2b063acb7b808f071eca790a6a14c"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.586423 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-scripts\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.590973 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9rbv\" (UniqueName: \"kubernetes.io/projected/23c9606d-46f1-4079-a6b1-ecc87e1c99b1-kube-api-access-n9rbv\") pod \"ceilometer-0\" (UID: \"23c9606d-46f1-4079-a6b1-ecc87e1c99b1\") " pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.664133 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.678319 4947 scope.go:117] "RemoveContainer" containerID="9ae6aeca4958aaa8abddb6e01d2d0b4cd34a211b67647584071a9757a1733ad3"
Nov 29 07:40:19 crc kubenswrapper[4947]: E1129 07:40:19.678829 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ae6aeca4958aaa8abddb6e01d2d0b4cd34a211b67647584071a9757a1733ad3\": container with ID starting with 9ae6aeca4958aaa8abddb6e01d2d0b4cd34a211b67647584071a9757a1733ad3 not found: ID does not exist" containerID="9ae6aeca4958aaa8abddb6e01d2d0b4cd34a211b67647584071a9757a1733ad3"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.678867 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ae6aeca4958aaa8abddb6e01d2d0b4cd34a211b67647584071a9757a1733ad3"} err="failed to get container status \"9ae6aeca4958aaa8abddb6e01d2d0b4cd34a211b67647584071a9757a1733ad3\": rpc error: code = NotFound desc = could not find container \"9ae6aeca4958aaa8abddb6e01d2d0b4cd34a211b67647584071a9757a1733ad3\": container with ID starting with 9ae6aeca4958aaa8abddb6e01d2d0b4cd34a211b67647584071a9757a1733ad3 not found: ID does not exist"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.678897 4947 scope.go:117] "RemoveContainer" containerID="aef36c05b7b3247775c426899af1ec33afa2b063acb7b808f071eca790a6a14c"
Nov 29 07:40:19 crc kubenswrapper[4947]: E1129 07:40:19.679147 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aef36c05b7b3247775c426899af1ec33afa2b063acb7b808f071eca790a6a14c\": container with ID starting with aef36c05b7b3247775c426899af1ec33afa2b063acb7b808f071eca790a6a14c not found: ID does not exist" containerID="aef36c05b7b3247775c426899af1ec33afa2b063acb7b808f071eca790a6a14c"
Nov 29 07:40:19 crc kubenswrapper[4947]: I1129 07:40:19.679176 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aef36c05b7b3247775c426899af1ec33afa2b063acb7b808f071eca790a6a14c"} err="failed to get container status \"aef36c05b7b3247775c426899af1ec33afa2b063acb7b808f071eca790a6a14c\": rpc error: code = NotFound desc = could not find container \"aef36c05b7b3247775c426899af1ec33afa2b063acb7b808f071eca790a6a14c\": container with ID starting with aef36c05b7b3247775c426899af1ec33afa2b063acb7b808f071eca790a6a14c not found: ID does not exist"
Nov 29 07:40:20 crc kubenswrapper[4947]: I1129 07:40:20.048945 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0"
Nov 29 07:40:20 crc kubenswrapper[4947]: I1129 07:40:20.104685 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"]
Nov 29 07:40:20 crc kubenswrapper[4947]: I1129 07:40:20.169417 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:40:20 crc kubenswrapper[4947]: I1129 07:40:20.251587 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="d3b88e10-85c8-44e5-b382-7930353e7201" containerName="manila-scheduler" containerID="cri-o://854f6c5a7e8fdc6f7cffdebb9292ff392b8e94cfee13cf051371f79e0a671848" gracePeriod=30
Nov 29 07:40:20 crc kubenswrapper[4947]: I1129 07:40:20.251688 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="d3b88e10-85c8-44e5-b382-7930353e7201" containerName="probe" containerID="cri-o://4e2ed1794bafc7102002897561ba63320ef019ca3a755ef510bd2022af0789a8" gracePeriod=30
Nov 29 07:40:20 crc kubenswrapper[4947]: W1129 07:40:20.533193 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23c9606d_46f1_4079_a6b1_ecc87e1c99b1.slice/crio-7e079c5b6f4f03ffb2a260ba8d7dafeb20a826e63b4d933ec12ea1f708421e51 WatchSource:0}: Error finding container 7e079c5b6f4f03ffb2a260ba8d7dafeb20a826e63b4d933ec12ea1f708421e51: Status 404 returned error can't find the container with id 7e079c5b6f4f03ffb2a260ba8d7dafeb20a826e63b4d933ec12ea1f708421e51
Nov 29 07:40:21 crc kubenswrapper[4947]: I1129 07:40:21.190422 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="886ea3a6-2ccc-4e4e-8a77-25443b150f70" path="/var/lib/kubelet/pods/886ea3a6-2ccc-4e4e-8a77-25443b150f70/volumes"
Nov 29 07:40:21 crc kubenswrapper[4947]: I1129 07:40:21.191985 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dec28edc-46ae-456a-9be2-ec56bdfd409f" path="/var/lib/kubelet/pods/dec28edc-46ae-456a-9be2-ec56bdfd409f/volumes"
Nov 29 07:40:21 crc kubenswrapper[4947]: I1129 07:40:21.271765 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23c9606d-46f1-4079-a6b1-ecc87e1c99b1","Type":"ContainerStarted","Data":"7e079c5b6f4f03ffb2a260ba8d7dafeb20a826e63b4d933ec12ea1f708421e51"}
Nov 29 07:40:21 crc kubenswrapper[4947]: I1129 07:40:21.274332 4947 generic.go:334] "Generic (PLEG): container finished" podID="d3b88e10-85c8-44e5-b382-7930353e7201" containerID="4e2ed1794bafc7102002897561ba63320ef019ca3a755ef510bd2022af0789a8" exitCode=0
Nov 29 07:40:21 crc kubenswrapper[4947]: I1129 07:40:21.274390 4947 generic.go:334] "Generic (PLEG): container finished" podID="d3b88e10-85c8-44e5-b382-7930353e7201" containerID="854f6c5a7e8fdc6f7cffdebb9292ff392b8e94cfee13cf051371f79e0a671848" exitCode=0
Nov 29 07:40:21 crc kubenswrapper[4947]: I1129 07:40:21.274421 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d3b88e10-85c8-44e5-b382-7930353e7201","Type":"ContainerDied","Data":"4e2ed1794bafc7102002897561ba63320ef019ca3a755ef510bd2022af0789a8"}
Nov 29 07:40:21 crc kubenswrapper[4947]: I1129 07:40:21.274461 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d3b88e10-85c8-44e5-b382-7930353e7201","Type":"ContainerDied","Data":"854f6c5a7e8fdc6f7cffdebb9292ff392b8e94cfee13cf051371f79e0a671848"}
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.097909 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.138281 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-scripts\") pod \"d3b88e10-85c8-44e5-b382-7930353e7201\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") "
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.138618 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-config-data\") pod \"d3b88e10-85c8-44e5-b382-7930353e7201\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") "
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.138881 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-combined-ca-bundle\") pod \"d3b88e10-85c8-44e5-b382-7930353e7201\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") "
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.138951 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-config-data-custom\") pod \"d3b88e10-85c8-44e5-b382-7930353e7201\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") "
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.139015 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d3b88e10-85c8-44e5-b382-7930353e7201-etc-machine-id\") pod \"d3b88e10-85c8-44e5-b382-7930353e7201\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") "
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.139126 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tns6k\" (UniqueName: \"kubernetes.io/projected/d3b88e10-85c8-44e5-b382-7930353e7201-kube-api-access-tns6k\") pod \"d3b88e10-85c8-44e5-b382-7930353e7201\" (UID: \"d3b88e10-85c8-44e5-b382-7930353e7201\") "
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.139423 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3b88e10-85c8-44e5-b382-7930353e7201-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d3b88e10-85c8-44e5-b382-7930353e7201" (UID: "d3b88e10-85c8-44e5-b382-7930353e7201"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.139783 4947 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d3b88e10-85c8-44e5-b382-7930353e7201-etc-machine-id\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.145491 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-scripts" (OuterVolumeSpecName: "scripts") pod "d3b88e10-85c8-44e5-b382-7930353e7201" (UID: "d3b88e10-85c8-44e5-b382-7930353e7201"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.168479 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3b88e10-85c8-44e5-b382-7930353e7201-kube-api-access-tns6k" (OuterVolumeSpecName: "kube-api-access-tns6k") pod "d3b88e10-85c8-44e5-b382-7930353e7201" (UID: "d3b88e10-85c8-44e5-b382-7930353e7201"). InnerVolumeSpecName "kube-api-access-tns6k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.171566 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d3b88e10-85c8-44e5-b382-7930353e7201" (UID: "d3b88e10-85c8-44e5-b382-7930353e7201"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.180097 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166"
Nov 29 07:40:22 crc kubenswrapper[4947]: E1129 07:40:22.180533 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7"
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.242858 4947 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.242899 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tns6k\" (UniqueName: \"kubernetes.io/projected/d3b88e10-85c8-44e5-b382-7930353e7201-kube-api-access-tns6k\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.242911 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-scripts\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.296685 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23c9606d-46f1-4079-a6b1-ecc87e1c99b1","Type":"ContainerStarted","Data":"353507ead9d28955d0a3de5d2a191ba5d5cc15131710e3131585d53b72f642c9"}
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.301186 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d3b88e10-85c8-44e5-b382-7930353e7201","Type":"ContainerDied","Data":"c450bd48dbc42b73fc1ec64a29c8ae6eda1376c1720d52bb8fea0bd44432db19"}
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.302306 4947 scope.go:117] "RemoveContainer" containerID="4e2ed1794bafc7102002897561ba63320ef019ca3a755ef510bd2022af0789a8"
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.301264 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.354531 4947 scope.go:117] "RemoveContainer" containerID="854f6c5a7e8fdc6f7cffdebb9292ff392b8e94cfee13cf051371f79e0a671848"
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.738150 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3b88e10-85c8-44e5-b382-7930353e7201" (UID: "d3b88e10-85c8-44e5-b382-7930353e7201"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.766739 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.786457 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-config-data" (OuterVolumeSpecName: "config-data") pod "d3b88e10-85c8-44e5-b382-7930353e7201" (UID: "d3b88e10-85c8-44e5-b382-7930353e7201"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.868584 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b88e10-85c8-44e5-b382-7930353e7201-config-data\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.956436 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"]
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.966469 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"]
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.982632 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"]
Nov 29 07:40:22 crc kubenswrapper[4947]: E1129 07:40:22.983084 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3b88e10-85c8-44e5-b382-7930353e7201" containerName="probe"
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.983104 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3b88e10-85c8-44e5-b382-7930353e7201" containerName="probe"
Nov 29 07:40:22 crc kubenswrapper[4947]: E1129 07:40:22.983119 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3b88e10-85c8-44e5-b382-7930353e7201" containerName="manila-scheduler"
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.983126 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3b88e10-85c8-44e5-b382-7930353e7201" containerName="manila-scheduler"
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.983332 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3b88e10-85c8-44e5-b382-7930353e7201" containerName="manila-scheduler"
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.983362 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3b88e10-85c8-44e5-b382-7930353e7201" containerName="probe"
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.984541 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.987027 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data"
Nov 29 07:40:22 crc kubenswrapper[4947]: I1129 07:40:22.997346 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Nov 29 07:40:23 crc kubenswrapper[4947]: I1129 07:40:23.072545 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f3eed66-4789-4722-91df-8260e5af75a6-config-data\") pod \"manila-scheduler-0\" (UID: \"4f3eed66-4789-4722-91df-8260e5af75a6\") " pod="openstack/manila-scheduler-0"
Nov 29 07:40:23 crc kubenswrapper[4947]: I1129 07:40:23.072667 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4f3eed66-4789-4722-91df-8260e5af75a6-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"4f3eed66-4789-4722-91df-8260e5af75a6\") " pod="openstack/manila-scheduler-0"
Nov 29 07:40:23 crc kubenswrapper[4947]: I1129 07:40:23.072729
4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f3eed66-4789-4722-91df-8260e5af75a6-scripts\") pod \"manila-scheduler-0\" (UID: \"4f3eed66-4789-4722-91df-8260e5af75a6\") " pod="openstack/manila-scheduler-0" Nov 29 07:40:23 crc kubenswrapper[4947]: I1129 07:40:23.074067 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f3eed66-4789-4722-91df-8260e5af75a6-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"4f3eed66-4789-4722-91df-8260e5af75a6\") " pod="openstack/manila-scheduler-0" Nov 29 07:40:23 crc kubenswrapper[4947]: I1129 07:40:23.074156 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxlr4\" (UniqueName: \"kubernetes.io/projected/4f3eed66-4789-4722-91df-8260e5af75a6-kube-api-access-sxlr4\") pod \"manila-scheduler-0\" (UID: \"4f3eed66-4789-4722-91df-8260e5af75a6\") " pod="openstack/manila-scheduler-0" Nov 29 07:40:23 crc kubenswrapper[4947]: I1129 07:40:23.074299 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f3eed66-4789-4722-91df-8260e5af75a6-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"4f3eed66-4789-4722-91df-8260e5af75a6\") " pod="openstack/manila-scheduler-0" Nov 29 07:40:23 crc kubenswrapper[4947]: I1129 07:40:23.175834 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f3eed66-4789-4722-91df-8260e5af75a6-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"4f3eed66-4789-4722-91df-8260e5af75a6\") " pod="openstack/manila-scheduler-0" Nov 29 07:40:23 crc kubenswrapper[4947]: I1129 07:40:23.175892 4947 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f3eed66-4789-4722-91df-8260e5af75a6-config-data\") pod \"manila-scheduler-0\" (UID: \"4f3eed66-4789-4722-91df-8260e5af75a6\") " pod="openstack/manila-scheduler-0" Nov 29 07:40:23 crc kubenswrapper[4947]: I1129 07:40:23.175945 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4f3eed66-4789-4722-91df-8260e5af75a6-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"4f3eed66-4789-4722-91df-8260e5af75a6\") " pod="openstack/manila-scheduler-0" Nov 29 07:40:23 crc kubenswrapper[4947]: I1129 07:40:23.175990 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f3eed66-4789-4722-91df-8260e5af75a6-scripts\") pod \"manila-scheduler-0\" (UID: \"4f3eed66-4789-4722-91df-8260e5af75a6\") " pod="openstack/manila-scheduler-0" Nov 29 07:40:23 crc kubenswrapper[4947]: I1129 07:40:23.176062 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f3eed66-4789-4722-91df-8260e5af75a6-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"4f3eed66-4789-4722-91df-8260e5af75a6\") " pod="openstack/manila-scheduler-0" Nov 29 07:40:23 crc kubenswrapper[4947]: I1129 07:40:23.176104 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4f3eed66-4789-4722-91df-8260e5af75a6-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"4f3eed66-4789-4722-91df-8260e5af75a6\") " pod="openstack/manila-scheduler-0" Nov 29 07:40:23 crc kubenswrapper[4947]: I1129 07:40:23.176118 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxlr4\" (UniqueName: \"kubernetes.io/projected/4f3eed66-4789-4722-91df-8260e5af75a6-kube-api-access-sxlr4\") pod \"manila-scheduler-0\" (UID: 
\"4f3eed66-4789-4722-91df-8260e5af75a6\") " pod="openstack/manila-scheduler-0" Nov 29 07:40:23 crc kubenswrapper[4947]: I1129 07:40:23.183047 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f3eed66-4789-4722-91df-8260e5af75a6-scripts\") pod \"manila-scheduler-0\" (UID: \"4f3eed66-4789-4722-91df-8260e5af75a6\") " pod="openstack/manila-scheduler-0" Nov 29 07:40:23 crc kubenswrapper[4947]: I1129 07:40:23.183265 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f3eed66-4789-4722-91df-8260e5af75a6-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"4f3eed66-4789-4722-91df-8260e5af75a6\") " pod="openstack/manila-scheduler-0" Nov 29 07:40:23 crc kubenswrapper[4947]: I1129 07:40:23.183265 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f3eed66-4789-4722-91df-8260e5af75a6-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"4f3eed66-4789-4722-91df-8260e5af75a6\") " pod="openstack/manila-scheduler-0" Nov 29 07:40:23 crc kubenswrapper[4947]: I1129 07:40:23.183369 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f3eed66-4789-4722-91df-8260e5af75a6-config-data\") pod \"manila-scheduler-0\" (UID: \"4f3eed66-4789-4722-91df-8260e5af75a6\") " pod="openstack/manila-scheduler-0" Nov 29 07:40:23 crc kubenswrapper[4947]: I1129 07:40:23.197343 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3b88e10-85c8-44e5-b382-7930353e7201" path="/var/lib/kubelet/pods/d3b88e10-85c8-44e5-b382-7930353e7201/volumes" Nov 29 07:40:23 crc kubenswrapper[4947]: I1129 07:40:23.205164 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxlr4\" (UniqueName: 
\"kubernetes.io/projected/4f3eed66-4789-4722-91df-8260e5af75a6-kube-api-access-sxlr4\") pod \"manila-scheduler-0\" (UID: \"4f3eed66-4789-4722-91df-8260e5af75a6\") " pod="openstack/manila-scheduler-0" Nov 29 07:40:23 crc kubenswrapper[4947]: I1129 07:40:23.313750 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23c9606d-46f1-4079-a6b1-ecc87e1c99b1","Type":"ContainerStarted","Data":"ca002d492a306432cbf86d684f1aed43af52cf8eecc75f9d0317ccf43cc4ab27"} Nov 29 07:40:23 crc kubenswrapper[4947]: I1129 07:40:23.371332 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 29 07:40:23 crc kubenswrapper[4947]: I1129 07:40:23.872692 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 29 07:40:24 crc kubenswrapper[4947]: I1129 07:40:24.330489 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23c9606d-46f1-4079-a6b1-ecc87e1c99b1","Type":"ContainerStarted","Data":"7c9d50ea45a4ebe1ac5f70c72b9600b9d23b16b928f510190c6eb461b85e6100"} Nov 29 07:40:24 crc kubenswrapper[4947]: I1129 07:40:24.332927 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"4f3eed66-4789-4722-91df-8260e5af75a6","Type":"ContainerStarted","Data":"232feb42c41edb7e4a2d40915c85336c7d1db31ccbceaae2d825715ed748dbe3"} Nov 29 07:40:24 crc kubenswrapper[4947]: I1129 07:40:24.332994 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"4f3eed66-4789-4722-91df-8260e5af75a6","Type":"ContainerStarted","Data":"912cbbea7abe93c6f2987fb79e4207418423985f54e50256a9a59c1df5146204"} Nov 29 07:40:25 crc kubenswrapper[4947]: I1129 07:40:25.017695 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Nov 29 07:40:25 crc kubenswrapper[4947]: I1129 07:40:25.345411 4947 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"4f3eed66-4789-4722-91df-8260e5af75a6","Type":"ContainerStarted","Data":"308a41d5e5038d9346b721df0b0c3562cdd1cd4b2f4f7859184729e6b7581ba0"} Nov 29 07:40:25 crc kubenswrapper[4947]: I1129 07:40:25.385397 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.385374823 podStartE2EDuration="3.385374823s" podCreationTimestamp="2025-11-29 07:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:40:25.367161561 +0000 UTC m=+3976.411543642" watchObservedRunningTime="2025-11-29 07:40:25.385374823 +0000 UTC m=+3976.429756904" Nov 29 07:40:29 crc kubenswrapper[4947]: I1129 07:40:29.443940 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23c9606d-46f1-4079-a6b1-ecc87e1c99b1","Type":"ContainerStarted","Data":"3430734909daac4e0dfbb7573b8c75e4b354b1ff633b973af30d41053bdbe30f"} Nov 29 07:40:29 crc kubenswrapper[4947]: I1129 07:40:29.445628 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:40:29 crc kubenswrapper[4947]: I1129 07:40:29.905723 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Nov 29 07:40:29 crc kubenswrapper[4947]: I1129 07:40:29.930560 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.018879395 podStartE2EDuration="10.930539274s" podCreationTimestamp="2025-11-29 07:40:19 +0000 UTC" firstStartedPulling="2025-11-29 07:40:20.536570285 +0000 UTC m=+3971.580952366" lastFinishedPulling="2025-11-29 07:40:25.448230164 +0000 UTC m=+3976.492612245" observedRunningTime="2025-11-29 07:40:29.467790088 +0000 UTC m=+3980.512172169" watchObservedRunningTime="2025-11-29 07:40:29.930539274 +0000 UTC 
m=+3980.974921355" Nov 29 07:40:29 crc kubenswrapper[4947]: I1129 07:40:29.949676 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Nov 29 07:40:30 crc kubenswrapper[4947]: I1129 07:40:30.457802 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="fca98943-b0c1-461f-91fb-56f3f476810c" containerName="manila-share" containerID="cri-o://e8d270e99e904d3cb1a8c1af4149573334d284c0e5558a850313cc0e3617d53a" gracePeriod=30 Nov 29 07:40:30 crc kubenswrapper[4947]: I1129 07:40:30.458731 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="fca98943-b0c1-461f-91fb-56f3f476810c" containerName="probe" containerID="cri-o://fcc5c92c546d9e693b73593447f0137669ce5ca6e83e1d8e52428a96ca30302b" gracePeriod=30 Nov 29 07:40:31 crc kubenswrapper[4947]: I1129 07:40:31.467382 4947 generic.go:334] "Generic (PLEG): container finished" podID="fca98943-b0c1-461f-91fb-56f3f476810c" containerID="fcc5c92c546d9e693b73593447f0137669ce5ca6e83e1d8e52428a96ca30302b" exitCode=0 Nov 29 07:40:31 crc kubenswrapper[4947]: I1129 07:40:31.467711 4947 generic.go:334] "Generic (PLEG): container finished" podID="fca98943-b0c1-461f-91fb-56f3f476810c" containerID="e8d270e99e904d3cb1a8c1af4149573334d284c0e5558a850313cc0e3617d53a" exitCode=1 Nov 29 07:40:31 crc kubenswrapper[4947]: I1129 07:40:31.467473 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"fca98943-b0c1-461f-91fb-56f3f476810c","Type":"ContainerDied","Data":"fcc5c92c546d9e693b73593447f0137669ce5ca6e83e1d8e52428a96ca30302b"} Nov 29 07:40:31 crc kubenswrapper[4947]: I1129 07:40:31.467825 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"fca98943-b0c1-461f-91fb-56f3f476810c","Type":"ContainerDied","Data":"e8d270e99e904d3cb1a8c1af4149573334d284c0e5558a850313cc0e3617d53a"} 
Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.096490 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.210299 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fca98943-b0c1-461f-91fb-56f3f476810c-etc-machine-id\") pod \"fca98943-b0c1-461f-91fb-56f3f476810c\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.210392 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-scripts\") pod \"fca98943-b0c1-461f-91fb-56f3f476810c\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.210422 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhs77\" (UniqueName: \"kubernetes.io/projected/fca98943-b0c1-461f-91fb-56f3f476810c-kube-api-access-mhs77\") pod \"fca98943-b0c1-461f-91fb-56f3f476810c\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.210420 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fca98943-b0c1-461f-91fb-56f3f476810c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fca98943-b0c1-461f-91fb-56f3f476810c" (UID: "fca98943-b0c1-461f-91fb-56f3f476810c"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.210494 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fca98943-b0c1-461f-91fb-56f3f476810c-ceph\") pod \"fca98943-b0c1-461f-91fb-56f3f476810c\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.210548 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-combined-ca-bundle\") pod \"fca98943-b0c1-461f-91fb-56f3f476810c\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.210569 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-config-data-custom\") pod \"fca98943-b0c1-461f-91fb-56f3f476810c\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.210590 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-config-data\") pod \"fca98943-b0c1-461f-91fb-56f3f476810c\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.210640 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/fca98943-b0c1-461f-91fb-56f3f476810c-var-lib-manila\") pod \"fca98943-b0c1-461f-91fb-56f3f476810c\" (UID: \"fca98943-b0c1-461f-91fb-56f3f476810c\") " Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.211126 4947 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/fca98943-b0c1-461f-91fb-56f3f476810c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.211173 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fca98943-b0c1-461f-91fb-56f3f476810c-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "fca98943-b0c1-461f-91fb-56f3f476810c" (UID: "fca98943-b0c1-461f-91fb-56f3f476810c"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.231299 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fca98943-b0c1-461f-91fb-56f3f476810c-kube-api-access-mhs77" (OuterVolumeSpecName: "kube-api-access-mhs77") pod "fca98943-b0c1-461f-91fb-56f3f476810c" (UID: "fca98943-b0c1-461f-91fb-56f3f476810c"). InnerVolumeSpecName "kube-api-access-mhs77". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.231532 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-scripts" (OuterVolumeSpecName: "scripts") pod "fca98943-b0c1-461f-91fb-56f3f476810c" (UID: "fca98943-b0c1-461f-91fb-56f3f476810c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.231943 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fca98943-b0c1-461f-91fb-56f3f476810c" (UID: "fca98943-b0c1-461f-91fb-56f3f476810c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.232153 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fca98943-b0c1-461f-91fb-56f3f476810c-ceph" (OuterVolumeSpecName: "ceph") pod "fca98943-b0c1-461f-91fb-56f3f476810c" (UID: "fca98943-b0c1-461f-91fb-56f3f476810c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.277467 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fca98943-b0c1-461f-91fb-56f3f476810c" (UID: "fca98943-b0c1-461f-91fb-56f3f476810c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.313581 4947 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.313633 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhs77\" (UniqueName: \"kubernetes.io/projected/fca98943-b0c1-461f-91fb-56f3f476810c-kube-api-access-mhs77\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.313650 4947 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fca98943-b0c1-461f-91fb-56f3f476810c-ceph\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.313660 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:32 crc 
kubenswrapper[4947]: I1129 07:40:32.313674 4947 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.313687 4947 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/fca98943-b0c1-461f-91fb-56f3f476810c-var-lib-manila\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.336788 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-config-data" (OuterVolumeSpecName: "config-data") pod "fca98943-b0c1-461f-91fb-56f3f476810c" (UID: "fca98943-b0c1-461f-91fb-56f3f476810c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.415965 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca98943-b0c1-461f-91fb-56f3f476810c-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.479548 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"fca98943-b0c1-461f-91fb-56f3f476810c","Type":"ContainerDied","Data":"41e631925b09eb92d0d1359728dad7f81099c95a05da334273fb29ab54fdd000"} Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.479628 4947 scope.go:117] "RemoveContainer" containerID="fcc5c92c546d9e693b73593447f0137669ce5ca6e83e1d8e52428a96ca30302b" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.479797 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.515658 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.517555 4947 scope.go:117] "RemoveContainer" containerID="e8d270e99e904d3cb1a8c1af4149573334d284c0e5558a850313cc0e3617d53a" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.530094 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.550343 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Nov 29 07:40:32 crc kubenswrapper[4947]: E1129 07:40:32.550918 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca98943-b0c1-461f-91fb-56f3f476810c" containerName="manila-share" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.550943 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca98943-b0c1-461f-91fb-56f3f476810c" containerName="manila-share" Nov 29 07:40:32 crc kubenswrapper[4947]: E1129 07:40:32.550978 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca98943-b0c1-461f-91fb-56f3f476810c" containerName="probe" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.550986 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca98943-b0c1-461f-91fb-56f3f476810c" containerName="probe" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.551212 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca98943-b0c1-461f-91fb-56f3f476810c" containerName="manila-share" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.551263 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca98943-b0c1-461f-91fb-56f3f476810c" containerName="probe" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.558730 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.561910 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.585748 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.619924 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcb6f32f-e314-410e-8902-c7812cf9bbdc-config-data\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.620064 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcb6f32f-e314-410e-8902-c7812cf9bbdc-scripts\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.620094 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcb6f32f-e314-410e-8902-c7812cf9bbdc-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.620117 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/fcb6f32f-e314-410e-8902-c7812cf9bbdc-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 
07:40:32.620147 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcb6f32f-e314-410e-8902-c7812cf9bbdc-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.620198 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb6f32f-e314-410e-8902-c7812cf9bbdc-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.620631 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g28m\" (UniqueName: \"kubernetes.io/projected/fcb6f32f-e314-410e-8902-c7812cf9bbdc-kube-api-access-6g28m\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.620795 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fcb6f32f-e314-410e-8902-c7812cf9bbdc-ceph\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.723150 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g28m\" (UniqueName: \"kubernetes.io/projected/fcb6f32f-e314-410e-8902-c7812cf9bbdc-kube-api-access-6g28m\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.723318 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fcb6f32f-e314-410e-8902-c7812cf9bbdc-ceph\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.724002 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcb6f32f-e314-410e-8902-c7812cf9bbdc-config-data\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.724253 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcb6f32f-e314-410e-8902-c7812cf9bbdc-scripts\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.724286 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcb6f32f-e314-410e-8902-c7812cf9bbdc-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.724661 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/fcb6f32f-e314-410e-8902-c7812cf9bbdc-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.724752 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/fcb6f32f-e314-410e-8902-c7812cf9bbdc-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.724791 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb6f32f-e314-410e-8902-c7812cf9bbdc-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.724843 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/fcb6f32f-e314-410e-8902-c7812cf9bbdc-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.724884 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcb6f32f-e314-410e-8902-c7812cf9bbdc-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.731234 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcb6f32f-e314-410e-8902-c7812cf9bbdc-scripts\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.731283 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcb6f32f-e314-410e-8902-c7812cf9bbdc-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " 
pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.731290 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fcb6f32f-e314-410e-8902-c7812cf9bbdc-ceph\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.731460 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb6f32f-e314-410e-8902-c7812cf9bbdc-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.731502 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcb6f32f-e314-410e-8902-c7812cf9bbdc-config-data\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.743764 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g28m\" (UniqueName: \"kubernetes.io/projected/fcb6f32f-e314-410e-8902-c7812cf9bbdc-kube-api-access-6g28m\") pod \"manila-share-share1-0\" (UID: \"fcb6f32f-e314-410e-8902-c7812cf9bbdc\") " pod="openstack/manila-share-share1-0" Nov 29 07:40:32 crc kubenswrapper[4947]: I1129 07:40:32.882685 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 29 07:40:33 crc kubenswrapper[4947]: I1129 07:40:33.193579 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fca98943-b0c1-461f-91fb-56f3f476810c" path="/var/lib/kubelet/pods/fca98943-b0c1-461f-91fb-56f3f476810c/volumes" Nov 29 07:40:33 crc kubenswrapper[4947]: I1129 07:40:33.372341 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Nov 29 07:40:33 crc kubenswrapper[4947]: I1129 07:40:33.471629 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 29 07:40:34 crc kubenswrapper[4947]: I1129 07:40:34.507263 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"fcb6f32f-e314-410e-8902-c7812cf9bbdc","Type":"ContainerStarted","Data":"56e311b2c2f7546bc4124ff577441ee56342e4173633bdbdfa76596cc7d5eb30"} Nov 29 07:40:34 crc kubenswrapper[4947]: I1129 07:40:34.507982 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"fcb6f32f-e314-410e-8902-c7812cf9bbdc","Type":"ContainerStarted","Data":"1dc23019a7531380880da4a63421765624eae04eb3be6f5079702f1960695095"} Nov 29 07:40:35 crc kubenswrapper[4947]: I1129 07:40:35.523031 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"fcb6f32f-e314-410e-8902-c7812cf9bbdc","Type":"ContainerStarted","Data":"37822bf4ec2c481ded8ca93f8dc75e921c89ffb33228adbc20772c773b003d93"} Nov 29 07:40:35 crc kubenswrapper[4947]: I1129 07:40:35.552839 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.552806024 podStartE2EDuration="3.552806024s" podCreationTimestamp="2025-11-29 07:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 
07:40:35.544150005 +0000 UTC m=+3986.588532116" watchObservedRunningTime="2025-11-29 07:40:35.552806024 +0000 UTC m=+3986.597188105" Nov 29 07:40:37 crc kubenswrapper[4947]: I1129 07:40:37.179466 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:40:37 crc kubenswrapper[4947]: E1129 07:40:37.180126 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:40:42 crc kubenswrapper[4947]: I1129 07:40:42.882773 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 29 07:40:45 crc kubenswrapper[4947]: I1129 07:40:45.035711 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Nov 29 07:40:48 crc kubenswrapper[4947]: I1129 07:40:48.178872 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:40:48 crc kubenswrapper[4947]: E1129 07:40:48.180055 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:40:49 crc kubenswrapper[4947]: I1129 07:40:49.672185 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 29 
07:40:54 crc kubenswrapper[4947]: I1129 07:40:54.547972 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Nov 29 07:40:59 crc kubenswrapper[4947]: I1129 07:40:59.187885 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:40:59 crc kubenswrapper[4947]: E1129 07:40:59.189412 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:41:13 crc kubenswrapper[4947]: I1129 07:41:13.184140 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:41:13 crc kubenswrapper[4947]: E1129 07:41:13.186290 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:41:24 crc kubenswrapper[4947]: I1129 07:41:24.179135 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:41:24 crc kubenswrapper[4947]: E1129 07:41:24.179975 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:41:38 crc kubenswrapper[4947]: I1129 07:41:38.179518 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:41:38 crc kubenswrapper[4947]: E1129 07:41:38.180448 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:41:50 crc kubenswrapper[4947]: I1129 07:41:50.179087 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:41:50 crc kubenswrapper[4947]: E1129 07:41:50.180117 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:42:02 crc kubenswrapper[4947]: I1129 07:42:02.179028 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:42:02 crc kubenswrapper[4947]: E1129 07:42:02.179899 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:42:03 crc kubenswrapper[4947]: I1129 07:42:03.858185 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 29 07:42:03 crc kubenswrapper[4947]: I1129 07:42:03.860276 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 29 07:42:03 crc kubenswrapper[4947]: I1129 07:42:03.863094 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zh8zj" Nov 29 07:42:03 crc kubenswrapper[4947]: I1129 07:42:03.864435 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 29 07:42:03 crc kubenswrapper[4947]: I1129 07:42:03.864435 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 29 07:42:03 crc kubenswrapper[4947]: I1129 07:42:03.871272 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 29 07:42:03 crc kubenswrapper[4947]: I1129 07:42:03.874775 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.015630 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6adb2028-a62e-456d-8863-55da513e78f2-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.015722 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6adb2028-a62e-456d-8863-55da513e78f2-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.015760 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.015801 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6adb2028-a62e-456d-8863-55da513e78f2-config-data\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.015856 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6adb2028-a62e-456d-8863-55da513e78f2-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.015903 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6adb2028-a62e-456d-8863-55da513e78f2-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.015938 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-xf875\" (UniqueName: \"kubernetes.io/projected/6adb2028-a62e-456d-8863-55da513e78f2-kube-api-access-xf875\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.015968 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6adb2028-a62e-456d-8863-55da513e78f2-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.016039 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6adb2028-a62e-456d-8863-55da513e78f2-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.118408 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6adb2028-a62e-456d-8863-55da513e78f2-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.118532 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.118633 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/6adb2028-a62e-456d-8863-55da513e78f2-config-data\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.118754 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6adb2028-a62e-456d-8863-55da513e78f2-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.118849 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6adb2028-a62e-456d-8863-55da513e78f2-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.118882 4947 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.118880 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6adb2028-a62e-456d-8863-55da513e78f2-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.119007 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf875\" (UniqueName: \"kubernetes.io/projected/6adb2028-a62e-456d-8863-55da513e78f2-kube-api-access-xf875\") pod 
\"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.119061 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6adb2028-a62e-456d-8863-55da513e78f2-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.119174 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6adb2028-a62e-456d-8863-55da513e78f2-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.119324 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6adb2028-a62e-456d-8863-55da513e78f2-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.119755 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6adb2028-a62e-456d-8863-55da513e78f2-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.120394 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6adb2028-a62e-456d-8863-55da513e78f2-config-data\") pod \"tempest-tests-tempest\" (UID: 
\"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.121350 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6adb2028-a62e-456d-8863-55da513e78f2-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.128611 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6adb2028-a62e-456d-8863-55da513e78f2-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.130112 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6adb2028-a62e-456d-8863-55da513e78f2-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.131287 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6adb2028-a62e-456d-8863-55da513e78f2-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.137815 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf875\" (UniqueName: \"kubernetes.io/projected/6adb2028-a62e-456d-8863-55da513e78f2-kube-api-access-xf875\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.154164 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.187894 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 29 07:42:04 crc kubenswrapper[4947]: I1129 07:42:04.754314 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 29 07:42:05 crc kubenswrapper[4947]: I1129 07:42:05.466566 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6adb2028-a62e-456d-8863-55da513e78f2","Type":"ContainerStarted","Data":"f3787fb4af1bfb24e6726dc546708bf62500f43e14803e2dbbbc591f52f1169a"} Nov 29 07:42:14 crc kubenswrapper[4947]: I1129 07:42:14.181938 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:42:14 crc kubenswrapper[4947]: E1129 07:42:14.184908 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:42:26 crc kubenswrapper[4947]: I1129 07:42:26.179606 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:42:26 crc kubenswrapper[4947]: E1129 07:42:26.180714 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:42:37 crc kubenswrapper[4947]: I1129 07:42:37.183494 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:42:37 crc kubenswrapper[4947]: E1129 07:42:37.184492 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:42:52 crc kubenswrapper[4947]: I1129 07:42:52.179963 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:42:52 crc kubenswrapper[4947]: E1129 07:42:52.181066 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:42:59 crc kubenswrapper[4947]: E1129 07:42:59.058228 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 29 07:42:59 crc kubenswrapper[4947]: E1129 07:42:59.059150 4947 kuberuntime_manager.go:1274] "Unhandled 
Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf875,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProb
e:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(6adb2028-a62e-456d-8863-55da513e78f2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:42:59 crc kubenswrapper[4947]: E1129 07:42:59.060896 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="6adb2028-a62e-456d-8863-55da513e78f2" Nov 29 07:43:00 crc kubenswrapper[4947]: E1129 07:43:00.081616 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="6adb2028-a62e-456d-8863-55da513e78f2" Nov 29 
07:43:05 crc kubenswrapper[4947]: I1129 07:43:05.179664 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:43:05 crc kubenswrapper[4947]: E1129 07:43:05.180778 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:43:15 crc kubenswrapper[4947]: I1129 07:43:15.374452 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 29 07:43:17 crc kubenswrapper[4947]: I1129 07:43:17.179517 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:43:17 crc kubenswrapper[4947]: E1129 07:43:17.180647 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:43:17 crc kubenswrapper[4947]: I1129 07:43:17.244822 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6adb2028-a62e-456d-8863-55da513e78f2","Type":"ContainerStarted","Data":"1bc5fd6cddbd2b45bc2863ddc970d5e883133108f1907a71d34f990f81884de8"} Nov 29 07:43:17 crc kubenswrapper[4947]: I1129 07:43:17.267559 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" 
podStartSLOduration=4.672896649 podStartE2EDuration="1m15.267532591s" podCreationTimestamp="2025-11-29 07:42:02 +0000 UTC" firstStartedPulling="2025-11-29 07:42:04.776448567 +0000 UTC m=+4075.820830648" lastFinishedPulling="2025-11-29 07:43:15.371084509 +0000 UTC m=+4146.415466590" observedRunningTime="2025-11-29 07:43:17.260873993 +0000 UTC m=+4148.305256094" watchObservedRunningTime="2025-11-29 07:43:17.267532591 +0000 UTC m=+4148.311914712" Nov 29 07:43:28 crc kubenswrapper[4947]: I1129 07:43:28.179485 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:43:29 crc kubenswrapper[4947]: I1129 07:43:29.364533 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"386fed2bf9e7112ee72ce8a6463b65c3b8de94f73faee1eae597cec2ca1d03cc"} Nov 29 07:43:31 crc kubenswrapper[4947]: I1129 07:43:31.030001 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s9k64"] Nov 29 07:43:31 crc kubenswrapper[4947]: I1129 07:43:31.034653 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s9k64" Nov 29 07:43:31 crc kubenswrapper[4947]: I1129 07:43:31.049251 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s9k64"] Nov 29 07:43:31 crc kubenswrapper[4947]: I1129 07:43:31.199878 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a10382-1dac-420c-912e-48afb52d0c26-catalog-content\") pod \"certified-operators-s9k64\" (UID: \"55a10382-1dac-420c-912e-48afb52d0c26\") " pod="openshift-marketplace/certified-operators-s9k64" Nov 29 07:43:31 crc kubenswrapper[4947]: I1129 07:43:31.200373 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a10382-1dac-420c-912e-48afb52d0c26-utilities\") pod \"certified-operators-s9k64\" (UID: \"55a10382-1dac-420c-912e-48afb52d0c26\") " pod="openshift-marketplace/certified-operators-s9k64" Nov 29 07:43:31 crc kubenswrapper[4947]: I1129 07:43:31.200438 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll7mv\" (UniqueName: \"kubernetes.io/projected/55a10382-1dac-420c-912e-48afb52d0c26-kube-api-access-ll7mv\") pod \"certified-operators-s9k64\" (UID: \"55a10382-1dac-420c-912e-48afb52d0c26\") " pod="openshift-marketplace/certified-operators-s9k64" Nov 29 07:43:31 crc kubenswrapper[4947]: I1129 07:43:31.302571 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a10382-1dac-420c-912e-48afb52d0c26-utilities\") pod \"certified-operators-s9k64\" (UID: \"55a10382-1dac-420c-912e-48afb52d0c26\") " pod="openshift-marketplace/certified-operators-s9k64" Nov 29 07:43:31 crc kubenswrapper[4947]: I1129 07:43:31.302650 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ll7mv\" (UniqueName: \"kubernetes.io/projected/55a10382-1dac-420c-912e-48afb52d0c26-kube-api-access-ll7mv\") pod \"certified-operators-s9k64\" (UID: \"55a10382-1dac-420c-912e-48afb52d0c26\") " pod="openshift-marketplace/certified-operators-s9k64" Nov 29 07:43:31 crc kubenswrapper[4947]: I1129 07:43:31.302749 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a10382-1dac-420c-912e-48afb52d0c26-catalog-content\") pod \"certified-operators-s9k64\" (UID: \"55a10382-1dac-420c-912e-48afb52d0c26\") " pod="openshift-marketplace/certified-operators-s9k64" Nov 29 07:43:31 crc kubenswrapper[4947]: I1129 07:43:31.303236 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a10382-1dac-420c-912e-48afb52d0c26-utilities\") pod \"certified-operators-s9k64\" (UID: \"55a10382-1dac-420c-912e-48afb52d0c26\") " pod="openshift-marketplace/certified-operators-s9k64" Nov 29 07:43:31 crc kubenswrapper[4947]: I1129 07:43:31.303301 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a10382-1dac-420c-912e-48afb52d0c26-catalog-content\") pod \"certified-operators-s9k64\" (UID: \"55a10382-1dac-420c-912e-48afb52d0c26\") " pod="openshift-marketplace/certified-operators-s9k64" Nov 29 07:43:31 crc kubenswrapper[4947]: I1129 07:43:31.323130 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll7mv\" (UniqueName: \"kubernetes.io/projected/55a10382-1dac-420c-912e-48afb52d0c26-kube-api-access-ll7mv\") pod \"certified-operators-s9k64\" (UID: \"55a10382-1dac-420c-912e-48afb52d0c26\") " pod="openshift-marketplace/certified-operators-s9k64" Nov 29 07:43:31 crc kubenswrapper[4947]: I1129 07:43:31.360042 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s9k64" Nov 29 07:43:31 crc kubenswrapper[4947]: I1129 07:43:31.938201 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s9k64"] Nov 29 07:43:32 crc kubenswrapper[4947]: I1129 07:43:32.399449 4947 generic.go:334] "Generic (PLEG): container finished" podID="55a10382-1dac-420c-912e-48afb52d0c26" containerID="4214d3a9e2566d640257ea17f352d9b43d2c3a9228541ccbd3ec2c1c7b838a8f" exitCode=0 Nov 29 07:43:32 crc kubenswrapper[4947]: I1129 07:43:32.399565 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9k64" event={"ID":"55a10382-1dac-420c-912e-48afb52d0c26","Type":"ContainerDied","Data":"4214d3a9e2566d640257ea17f352d9b43d2c3a9228541ccbd3ec2c1c7b838a8f"} Nov 29 07:43:32 crc kubenswrapper[4947]: I1129 07:43:32.399830 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9k64" event={"ID":"55a10382-1dac-420c-912e-48afb52d0c26","Type":"ContainerStarted","Data":"a4862dfcee6b2471e6251816f0e4d4b6510891d56fc7a8a5eb5013fb2a0fcecb"} Nov 29 07:43:34 crc kubenswrapper[4947]: I1129 07:43:34.425049 4947 generic.go:334] "Generic (PLEG): container finished" podID="55a10382-1dac-420c-912e-48afb52d0c26" containerID="aff48e45f9019219314cfaea62afb5b2b82262bb597d222cfbd2c5cb020f6613" exitCode=0 Nov 29 07:43:34 crc kubenswrapper[4947]: I1129 07:43:34.425642 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9k64" event={"ID":"55a10382-1dac-420c-912e-48afb52d0c26","Type":"ContainerDied","Data":"aff48e45f9019219314cfaea62afb5b2b82262bb597d222cfbd2c5cb020f6613"} Nov 29 07:43:35 crc kubenswrapper[4947]: I1129 07:43:35.437504 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9k64" 
event={"ID":"55a10382-1dac-420c-912e-48afb52d0c26","Type":"ContainerStarted","Data":"0221df0e523302d68273396173bacc0433bd5dd17dfec96f0eb6fc89ff4b9528"} Nov 29 07:43:35 crc kubenswrapper[4947]: I1129 07:43:35.462073 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s9k64" podStartSLOduration=1.835715084 podStartE2EDuration="4.462047566s" podCreationTimestamp="2025-11-29 07:43:31 +0000 UTC" firstStartedPulling="2025-11-29 07:43:32.402317602 +0000 UTC m=+4163.446699683" lastFinishedPulling="2025-11-29 07:43:35.028650074 +0000 UTC m=+4166.073032165" observedRunningTime="2025-11-29 07:43:35.456127876 +0000 UTC m=+4166.500509987" watchObservedRunningTime="2025-11-29 07:43:35.462047566 +0000 UTC m=+4166.506429657" Nov 29 07:43:35 crc kubenswrapper[4947]: I1129 07:43:35.837136 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-48xt6"] Nov 29 07:43:35 crc kubenswrapper[4947]: I1129 07:43:35.839946 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-48xt6" Nov 29 07:43:35 crc kubenswrapper[4947]: I1129 07:43:35.850302 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-48xt6"] Nov 29 07:43:36 crc kubenswrapper[4947]: I1129 07:43:36.007131 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a22c650-bb4f-41d6-ac06-f53d84478d28-utilities\") pod \"redhat-operators-48xt6\" (UID: \"7a22c650-bb4f-41d6-ac06-f53d84478d28\") " pod="openshift-marketplace/redhat-operators-48xt6" Nov 29 07:43:36 crc kubenswrapper[4947]: I1129 07:43:36.007281 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqbst\" (UniqueName: \"kubernetes.io/projected/7a22c650-bb4f-41d6-ac06-f53d84478d28-kube-api-access-lqbst\") pod \"redhat-operators-48xt6\" (UID: \"7a22c650-bb4f-41d6-ac06-f53d84478d28\") " pod="openshift-marketplace/redhat-operators-48xt6" Nov 29 07:43:36 crc kubenswrapper[4947]: I1129 07:43:36.007323 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a22c650-bb4f-41d6-ac06-f53d84478d28-catalog-content\") pod \"redhat-operators-48xt6\" (UID: \"7a22c650-bb4f-41d6-ac06-f53d84478d28\") " pod="openshift-marketplace/redhat-operators-48xt6" Nov 29 07:43:36 crc kubenswrapper[4947]: I1129 07:43:36.108708 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqbst\" (UniqueName: \"kubernetes.io/projected/7a22c650-bb4f-41d6-ac06-f53d84478d28-kube-api-access-lqbst\") pod \"redhat-operators-48xt6\" (UID: \"7a22c650-bb4f-41d6-ac06-f53d84478d28\") " pod="openshift-marketplace/redhat-operators-48xt6" Nov 29 07:43:36 crc kubenswrapper[4947]: I1129 07:43:36.108797 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a22c650-bb4f-41d6-ac06-f53d84478d28-catalog-content\") pod \"redhat-operators-48xt6\" (UID: \"7a22c650-bb4f-41d6-ac06-f53d84478d28\") " pod="openshift-marketplace/redhat-operators-48xt6" Nov 29 07:43:36 crc kubenswrapper[4947]: I1129 07:43:36.108878 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a22c650-bb4f-41d6-ac06-f53d84478d28-utilities\") pod \"redhat-operators-48xt6\" (UID: \"7a22c650-bb4f-41d6-ac06-f53d84478d28\") " pod="openshift-marketplace/redhat-operators-48xt6" Nov 29 07:43:36 crc kubenswrapper[4947]: I1129 07:43:36.109679 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a22c650-bb4f-41d6-ac06-f53d84478d28-catalog-content\") pod \"redhat-operators-48xt6\" (UID: \"7a22c650-bb4f-41d6-ac06-f53d84478d28\") " pod="openshift-marketplace/redhat-operators-48xt6" Nov 29 07:43:36 crc kubenswrapper[4947]: I1129 07:43:36.109682 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a22c650-bb4f-41d6-ac06-f53d84478d28-utilities\") pod \"redhat-operators-48xt6\" (UID: \"7a22c650-bb4f-41d6-ac06-f53d84478d28\") " pod="openshift-marketplace/redhat-operators-48xt6" Nov 29 07:43:36 crc kubenswrapper[4947]: I1129 07:43:36.128838 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqbst\" (UniqueName: \"kubernetes.io/projected/7a22c650-bb4f-41d6-ac06-f53d84478d28-kube-api-access-lqbst\") pod \"redhat-operators-48xt6\" (UID: \"7a22c650-bb4f-41d6-ac06-f53d84478d28\") " pod="openshift-marketplace/redhat-operators-48xt6" Nov 29 07:43:36 crc kubenswrapper[4947]: I1129 07:43:36.161941 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-48xt6" Nov 29 07:43:36 crc kubenswrapper[4947]: I1129 07:43:36.680904 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-48xt6"] Nov 29 07:43:37 crc kubenswrapper[4947]: I1129 07:43:37.459369 4947 generic.go:334] "Generic (PLEG): container finished" podID="7a22c650-bb4f-41d6-ac06-f53d84478d28" containerID="94f2c02b3294bd0f211fa291e7369e8426bdda8497d2f4a116b9891beb92feb4" exitCode=0 Nov 29 07:43:37 crc kubenswrapper[4947]: I1129 07:43:37.459430 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48xt6" event={"ID":"7a22c650-bb4f-41d6-ac06-f53d84478d28","Type":"ContainerDied","Data":"94f2c02b3294bd0f211fa291e7369e8426bdda8497d2f4a116b9891beb92feb4"} Nov 29 07:43:37 crc kubenswrapper[4947]: I1129 07:43:37.460020 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48xt6" event={"ID":"7a22c650-bb4f-41d6-ac06-f53d84478d28","Type":"ContainerStarted","Data":"02e203edf488d77831cf26ffaa42c6711c76ae4340e8ba9cc43029825b255e06"} Nov 29 07:43:39 crc kubenswrapper[4947]: I1129 07:43:39.553561 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48xt6" event={"ID":"7a22c650-bb4f-41d6-ac06-f53d84478d28","Type":"ContainerStarted","Data":"ba0a2b6e81453a0110c189a2782bd996c3ca989885031a87aa1d9406841edca1"} Nov 29 07:43:41 crc kubenswrapper[4947]: I1129 07:43:41.361098 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s9k64" Nov 29 07:43:41 crc kubenswrapper[4947]: I1129 07:43:41.363304 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s9k64" Nov 29 07:43:41 crc kubenswrapper[4947]: I1129 07:43:41.441774 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-s9k64" Nov 29 07:43:41 crc kubenswrapper[4947]: I1129 07:43:41.574671 4947 generic.go:334] "Generic (PLEG): container finished" podID="7a22c650-bb4f-41d6-ac06-f53d84478d28" containerID="ba0a2b6e81453a0110c189a2782bd996c3ca989885031a87aa1d9406841edca1" exitCode=0 Nov 29 07:43:41 crc kubenswrapper[4947]: I1129 07:43:41.574756 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48xt6" event={"ID":"7a22c650-bb4f-41d6-ac06-f53d84478d28","Type":"ContainerDied","Data":"ba0a2b6e81453a0110c189a2782bd996c3ca989885031a87aa1d9406841edca1"} Nov 29 07:43:41 crc kubenswrapper[4947]: I1129 07:43:41.628990 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s9k64" Nov 29 07:43:44 crc kubenswrapper[4947]: I1129 07:43:44.026360 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s9k64"] Nov 29 07:43:44 crc kubenswrapper[4947]: I1129 07:43:44.027107 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s9k64" podUID="55a10382-1dac-420c-912e-48afb52d0c26" containerName="registry-server" containerID="cri-o://0221df0e523302d68273396173bacc0433bd5dd17dfec96f0eb6fc89ff4b9528" gracePeriod=2 Nov 29 07:43:45 crc kubenswrapper[4947]: I1129 07:43:45.623136 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48xt6" event={"ID":"7a22c650-bb4f-41d6-ac06-f53d84478d28","Type":"ContainerStarted","Data":"9d00a62aa633fcefd8cccc4fdc3ee134d65d80e84787b2c8f3b6ad82d7e169bb"} Nov 29 07:43:45 crc kubenswrapper[4947]: I1129 07:43:45.628849 4947 generic.go:334] "Generic (PLEG): container finished" podID="55a10382-1dac-420c-912e-48afb52d0c26" containerID="0221df0e523302d68273396173bacc0433bd5dd17dfec96f0eb6fc89ff4b9528" exitCode=0 Nov 29 07:43:45 crc kubenswrapper[4947]: I1129 
07:43:45.628937 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9k64" event={"ID":"55a10382-1dac-420c-912e-48afb52d0c26","Type":"ContainerDied","Data":"0221df0e523302d68273396173bacc0433bd5dd17dfec96f0eb6fc89ff4b9528"} Nov 29 07:43:45 crc kubenswrapper[4947]: I1129 07:43:45.628991 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9k64" event={"ID":"55a10382-1dac-420c-912e-48afb52d0c26","Type":"ContainerDied","Data":"a4862dfcee6b2471e6251816f0e4d4b6510891d56fc7a8a5eb5013fb2a0fcecb"} Nov 29 07:43:45 crc kubenswrapper[4947]: I1129 07:43:45.629004 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4862dfcee6b2471e6251816f0e4d4b6510891d56fc7a8a5eb5013fb2a0fcecb" Nov 29 07:43:45 crc kubenswrapper[4947]: I1129 07:43:45.660168 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s9k64" Nov 29 07:43:45 crc kubenswrapper[4947]: I1129 07:43:45.712878 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-48xt6" podStartSLOduration=3.057763053 podStartE2EDuration="10.712854179s" podCreationTimestamp="2025-11-29 07:43:35 +0000 UTC" firstStartedPulling="2025-11-29 07:43:37.462389149 +0000 UTC m=+4168.506771220" lastFinishedPulling="2025-11-29 07:43:45.117480265 +0000 UTC m=+4176.161862346" observedRunningTime="2025-11-29 07:43:45.679259868 +0000 UTC m=+4176.723641959" watchObservedRunningTime="2025-11-29 07:43:45.712854179 +0000 UTC m=+4176.757236260" Nov 29 07:43:45 crc kubenswrapper[4947]: I1129 07:43:45.786789 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a10382-1dac-420c-912e-48afb52d0c26-catalog-content\") pod \"55a10382-1dac-420c-912e-48afb52d0c26\" (UID: \"55a10382-1dac-420c-912e-48afb52d0c26\") 
" Nov 29 07:43:45 crc kubenswrapper[4947]: I1129 07:43:45.786984 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll7mv\" (UniqueName: \"kubernetes.io/projected/55a10382-1dac-420c-912e-48afb52d0c26-kube-api-access-ll7mv\") pod \"55a10382-1dac-420c-912e-48afb52d0c26\" (UID: \"55a10382-1dac-420c-912e-48afb52d0c26\") " Nov 29 07:43:45 crc kubenswrapper[4947]: I1129 07:43:45.787240 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a10382-1dac-420c-912e-48afb52d0c26-utilities\") pod \"55a10382-1dac-420c-912e-48afb52d0c26\" (UID: \"55a10382-1dac-420c-912e-48afb52d0c26\") " Nov 29 07:43:45 crc kubenswrapper[4947]: I1129 07:43:45.787976 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55a10382-1dac-420c-912e-48afb52d0c26-utilities" (OuterVolumeSpecName: "utilities") pod "55a10382-1dac-420c-912e-48afb52d0c26" (UID: "55a10382-1dac-420c-912e-48afb52d0c26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:43:45 crc kubenswrapper[4947]: I1129 07:43:45.793649 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55a10382-1dac-420c-912e-48afb52d0c26-kube-api-access-ll7mv" (OuterVolumeSpecName: "kube-api-access-ll7mv") pod "55a10382-1dac-420c-912e-48afb52d0c26" (UID: "55a10382-1dac-420c-912e-48afb52d0c26"). InnerVolumeSpecName "kube-api-access-ll7mv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:43:45 crc kubenswrapper[4947]: I1129 07:43:45.830774 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55a10382-1dac-420c-912e-48afb52d0c26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55a10382-1dac-420c-912e-48afb52d0c26" (UID: "55a10382-1dac-420c-912e-48afb52d0c26"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:43:45 crc kubenswrapper[4947]: I1129 07:43:45.890143 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll7mv\" (UniqueName: \"kubernetes.io/projected/55a10382-1dac-420c-912e-48afb52d0c26-kube-api-access-ll7mv\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:45 crc kubenswrapper[4947]: I1129 07:43:45.890559 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a10382-1dac-420c-912e-48afb52d0c26-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:45 crc kubenswrapper[4947]: I1129 07:43:45.890598 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a10382-1dac-420c-912e-48afb52d0c26-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:46 crc kubenswrapper[4947]: I1129 07:43:46.162545 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-48xt6" Nov 29 07:43:46 crc kubenswrapper[4947]: I1129 07:43:46.163849 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-48xt6" Nov 29 07:43:46 crc kubenswrapper[4947]: I1129 07:43:46.638895 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s9k64" Nov 29 07:43:46 crc kubenswrapper[4947]: I1129 07:43:46.674199 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s9k64"] Nov 29 07:43:46 crc kubenswrapper[4947]: I1129 07:43:46.685626 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s9k64"] Nov 29 07:43:47 crc kubenswrapper[4947]: I1129 07:43:47.190492 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55a10382-1dac-420c-912e-48afb52d0c26" path="/var/lib/kubelet/pods/55a10382-1dac-420c-912e-48afb52d0c26/volumes" Nov 29 07:43:47 crc kubenswrapper[4947]: I1129 07:43:47.215661 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-48xt6" podUID="7a22c650-bb4f-41d6-ac06-f53d84478d28" containerName="registry-server" probeResult="failure" output=< Nov 29 07:43:47 crc kubenswrapper[4947]: timeout: failed to connect service ":50051" within 1s Nov 29 07:43:47 crc kubenswrapper[4947]: > Nov 29 07:43:56 crc kubenswrapper[4947]: I1129 07:43:56.213960 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-48xt6" Nov 29 07:43:56 crc kubenswrapper[4947]: I1129 07:43:56.266830 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-48xt6" Nov 29 07:43:56 crc kubenswrapper[4947]: I1129 07:43:56.453771 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-48xt6"] Nov 29 07:43:57 crc kubenswrapper[4947]: I1129 07:43:57.741826 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-48xt6" podUID="7a22c650-bb4f-41d6-ac06-f53d84478d28" containerName="registry-server" containerID="cri-o://9d00a62aa633fcefd8cccc4fdc3ee134d65d80e84787b2c8f3b6ad82d7e169bb" 
gracePeriod=2 Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.232932 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-48xt6" Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.371883 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqbst\" (UniqueName: \"kubernetes.io/projected/7a22c650-bb4f-41d6-ac06-f53d84478d28-kube-api-access-lqbst\") pod \"7a22c650-bb4f-41d6-ac06-f53d84478d28\" (UID: \"7a22c650-bb4f-41d6-ac06-f53d84478d28\") " Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.372277 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a22c650-bb4f-41d6-ac06-f53d84478d28-utilities\") pod \"7a22c650-bb4f-41d6-ac06-f53d84478d28\" (UID: \"7a22c650-bb4f-41d6-ac06-f53d84478d28\") " Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.372394 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a22c650-bb4f-41d6-ac06-f53d84478d28-catalog-content\") pod \"7a22c650-bb4f-41d6-ac06-f53d84478d28\" (UID: \"7a22c650-bb4f-41d6-ac06-f53d84478d28\") " Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.373380 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a22c650-bb4f-41d6-ac06-f53d84478d28-utilities" (OuterVolumeSpecName: "utilities") pod "7a22c650-bb4f-41d6-ac06-f53d84478d28" (UID: "7a22c650-bb4f-41d6-ac06-f53d84478d28"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.377619 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a22c650-bb4f-41d6-ac06-f53d84478d28-kube-api-access-lqbst" (OuterVolumeSpecName: "kube-api-access-lqbst") pod "7a22c650-bb4f-41d6-ac06-f53d84478d28" (UID: "7a22c650-bb4f-41d6-ac06-f53d84478d28"). InnerVolumeSpecName "kube-api-access-lqbst". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.475384 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqbst\" (UniqueName: \"kubernetes.io/projected/7a22c650-bb4f-41d6-ac06-f53d84478d28-kube-api-access-lqbst\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.475701 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a22c650-bb4f-41d6-ac06-f53d84478d28-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.496445 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a22c650-bb4f-41d6-ac06-f53d84478d28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a22c650-bb4f-41d6-ac06-f53d84478d28" (UID: "7a22c650-bb4f-41d6-ac06-f53d84478d28"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.578471 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a22c650-bb4f-41d6-ac06-f53d84478d28-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.752572 4947 generic.go:334] "Generic (PLEG): container finished" podID="7a22c650-bb4f-41d6-ac06-f53d84478d28" containerID="9d00a62aa633fcefd8cccc4fdc3ee134d65d80e84787b2c8f3b6ad82d7e169bb" exitCode=0 Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.752620 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48xt6" event={"ID":"7a22c650-bb4f-41d6-ac06-f53d84478d28","Type":"ContainerDied","Data":"9d00a62aa633fcefd8cccc4fdc3ee134d65d80e84787b2c8f3b6ad82d7e169bb"} Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.752651 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48xt6" event={"ID":"7a22c650-bb4f-41d6-ac06-f53d84478d28","Type":"ContainerDied","Data":"02e203edf488d77831cf26ffaa42c6711c76ae4340e8ba9cc43029825b255e06"} Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.752674 4947 scope.go:117] "RemoveContainer" containerID="9d00a62aa633fcefd8cccc4fdc3ee134d65d80e84787b2c8f3b6ad82d7e169bb" Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.752806 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-48xt6" Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.792581 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-48xt6"] Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.797353 4947 scope.go:117] "RemoveContainer" containerID="ba0a2b6e81453a0110c189a2782bd996c3ca989885031a87aa1d9406841edca1" Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.804125 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-48xt6"] Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.828619 4947 scope.go:117] "RemoveContainer" containerID="94f2c02b3294bd0f211fa291e7369e8426bdda8497d2f4a116b9891beb92feb4" Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.882122 4947 scope.go:117] "RemoveContainer" containerID="9d00a62aa633fcefd8cccc4fdc3ee134d65d80e84787b2c8f3b6ad82d7e169bb" Nov 29 07:43:58 crc kubenswrapper[4947]: E1129 07:43:58.883039 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d00a62aa633fcefd8cccc4fdc3ee134d65d80e84787b2c8f3b6ad82d7e169bb\": container with ID starting with 9d00a62aa633fcefd8cccc4fdc3ee134d65d80e84787b2c8f3b6ad82d7e169bb not found: ID does not exist" containerID="9d00a62aa633fcefd8cccc4fdc3ee134d65d80e84787b2c8f3b6ad82d7e169bb" Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.883131 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d00a62aa633fcefd8cccc4fdc3ee134d65d80e84787b2c8f3b6ad82d7e169bb"} err="failed to get container status \"9d00a62aa633fcefd8cccc4fdc3ee134d65d80e84787b2c8f3b6ad82d7e169bb\": rpc error: code = NotFound desc = could not find container \"9d00a62aa633fcefd8cccc4fdc3ee134d65d80e84787b2c8f3b6ad82d7e169bb\": container with ID starting with 9d00a62aa633fcefd8cccc4fdc3ee134d65d80e84787b2c8f3b6ad82d7e169bb not found: ID does 
not exist" Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.883183 4947 scope.go:117] "RemoveContainer" containerID="ba0a2b6e81453a0110c189a2782bd996c3ca989885031a87aa1d9406841edca1" Nov 29 07:43:58 crc kubenswrapper[4947]: E1129 07:43:58.883841 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba0a2b6e81453a0110c189a2782bd996c3ca989885031a87aa1d9406841edca1\": container with ID starting with ba0a2b6e81453a0110c189a2782bd996c3ca989885031a87aa1d9406841edca1 not found: ID does not exist" containerID="ba0a2b6e81453a0110c189a2782bd996c3ca989885031a87aa1d9406841edca1" Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.883926 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba0a2b6e81453a0110c189a2782bd996c3ca989885031a87aa1d9406841edca1"} err="failed to get container status \"ba0a2b6e81453a0110c189a2782bd996c3ca989885031a87aa1d9406841edca1\": rpc error: code = NotFound desc = could not find container \"ba0a2b6e81453a0110c189a2782bd996c3ca989885031a87aa1d9406841edca1\": container with ID starting with ba0a2b6e81453a0110c189a2782bd996c3ca989885031a87aa1d9406841edca1 not found: ID does not exist" Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.883983 4947 scope.go:117] "RemoveContainer" containerID="94f2c02b3294bd0f211fa291e7369e8426bdda8497d2f4a116b9891beb92feb4" Nov 29 07:43:58 crc kubenswrapper[4947]: E1129 07:43:58.884729 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94f2c02b3294bd0f211fa291e7369e8426bdda8497d2f4a116b9891beb92feb4\": container with ID starting with 94f2c02b3294bd0f211fa291e7369e8426bdda8497d2f4a116b9891beb92feb4 not found: ID does not exist" containerID="94f2c02b3294bd0f211fa291e7369e8426bdda8497d2f4a116b9891beb92feb4" Nov 29 07:43:58 crc kubenswrapper[4947]: I1129 07:43:58.884785 4947 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94f2c02b3294bd0f211fa291e7369e8426bdda8497d2f4a116b9891beb92feb4"} err="failed to get container status \"94f2c02b3294bd0f211fa291e7369e8426bdda8497d2f4a116b9891beb92feb4\": rpc error: code = NotFound desc = could not find container \"94f2c02b3294bd0f211fa291e7369e8426bdda8497d2f4a116b9891beb92feb4\": container with ID starting with 94f2c02b3294bd0f211fa291e7369e8426bdda8497d2f4a116b9891beb92feb4 not found: ID does not exist" Nov 29 07:43:59 crc kubenswrapper[4947]: I1129 07:43:59.190750 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a22c650-bb4f-41d6-ac06-f53d84478d28" path="/var/lib/kubelet/pods/7a22c650-bb4f-41d6-ac06-f53d84478d28/volumes" Nov 29 07:44:26 crc kubenswrapper[4947]: I1129 07:44:26.904368 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7kz82"] Nov 29 07:44:26 crc kubenswrapper[4947]: E1129 07:44:26.905252 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a22c650-bb4f-41d6-ac06-f53d84478d28" containerName="registry-server" Nov 29 07:44:26 crc kubenswrapper[4947]: I1129 07:44:26.905266 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a22c650-bb4f-41d6-ac06-f53d84478d28" containerName="registry-server" Nov 29 07:44:26 crc kubenswrapper[4947]: E1129 07:44:26.905283 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a10382-1dac-420c-912e-48afb52d0c26" containerName="extract-content" Nov 29 07:44:26 crc kubenswrapper[4947]: I1129 07:44:26.905289 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a10382-1dac-420c-912e-48afb52d0c26" containerName="extract-content" Nov 29 07:44:26 crc kubenswrapper[4947]: E1129 07:44:26.905304 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a10382-1dac-420c-912e-48afb52d0c26" containerName="registry-server" Nov 29 07:44:26 crc kubenswrapper[4947]: I1129 07:44:26.905310 4947 
state_mem.go:107] "Deleted CPUSet assignment" podUID="55a10382-1dac-420c-912e-48afb52d0c26" containerName="registry-server" Nov 29 07:44:26 crc kubenswrapper[4947]: E1129 07:44:26.905327 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a22c650-bb4f-41d6-ac06-f53d84478d28" containerName="extract-utilities" Nov 29 07:44:26 crc kubenswrapper[4947]: I1129 07:44:26.905333 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a22c650-bb4f-41d6-ac06-f53d84478d28" containerName="extract-utilities" Nov 29 07:44:26 crc kubenswrapper[4947]: E1129 07:44:26.905348 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a22c650-bb4f-41d6-ac06-f53d84478d28" containerName="extract-content" Nov 29 07:44:26 crc kubenswrapper[4947]: I1129 07:44:26.905355 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a22c650-bb4f-41d6-ac06-f53d84478d28" containerName="extract-content" Nov 29 07:44:26 crc kubenswrapper[4947]: E1129 07:44:26.905366 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a10382-1dac-420c-912e-48afb52d0c26" containerName="extract-utilities" Nov 29 07:44:26 crc kubenswrapper[4947]: I1129 07:44:26.905372 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a10382-1dac-420c-912e-48afb52d0c26" containerName="extract-utilities" Nov 29 07:44:26 crc kubenswrapper[4947]: I1129 07:44:26.905536 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a22c650-bb4f-41d6-ac06-f53d84478d28" containerName="registry-server" Nov 29 07:44:26 crc kubenswrapper[4947]: I1129 07:44:26.905558 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="55a10382-1dac-420c-912e-48afb52d0c26" containerName="registry-server" Nov 29 07:44:26 crc kubenswrapper[4947]: I1129 07:44:26.907143 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7kz82" Nov 29 07:44:26 crc kubenswrapper[4947]: I1129 07:44:26.922970 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7kz82"] Nov 29 07:44:27 crc kubenswrapper[4947]: I1129 07:44:27.080355 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b864faf-129c-4728-8d8c-5ebda5285ca4-utilities\") pod \"community-operators-7kz82\" (UID: \"9b864faf-129c-4728-8d8c-5ebda5285ca4\") " pod="openshift-marketplace/community-operators-7kz82" Nov 29 07:44:27 crc kubenswrapper[4947]: I1129 07:44:27.080398 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtfwt\" (UniqueName: \"kubernetes.io/projected/9b864faf-129c-4728-8d8c-5ebda5285ca4-kube-api-access-vtfwt\") pod \"community-operators-7kz82\" (UID: \"9b864faf-129c-4728-8d8c-5ebda5285ca4\") " pod="openshift-marketplace/community-operators-7kz82" Nov 29 07:44:27 crc kubenswrapper[4947]: I1129 07:44:27.080518 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b864faf-129c-4728-8d8c-5ebda5285ca4-catalog-content\") pod \"community-operators-7kz82\" (UID: \"9b864faf-129c-4728-8d8c-5ebda5285ca4\") " pod="openshift-marketplace/community-operators-7kz82" Nov 29 07:44:27 crc kubenswrapper[4947]: I1129 07:44:27.182983 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b864faf-129c-4728-8d8c-5ebda5285ca4-catalog-content\") pod \"community-operators-7kz82\" (UID: \"9b864faf-129c-4728-8d8c-5ebda5285ca4\") " pod="openshift-marketplace/community-operators-7kz82" Nov 29 07:44:27 crc kubenswrapper[4947]: I1129 07:44:27.183157 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b864faf-129c-4728-8d8c-5ebda5285ca4-utilities\") pod \"community-operators-7kz82\" (UID: \"9b864faf-129c-4728-8d8c-5ebda5285ca4\") " pod="openshift-marketplace/community-operators-7kz82" Nov 29 07:44:27 crc kubenswrapper[4947]: I1129 07:44:27.183200 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtfwt\" (UniqueName: \"kubernetes.io/projected/9b864faf-129c-4728-8d8c-5ebda5285ca4-kube-api-access-vtfwt\") pod \"community-operators-7kz82\" (UID: \"9b864faf-129c-4728-8d8c-5ebda5285ca4\") " pod="openshift-marketplace/community-operators-7kz82" Nov 29 07:44:27 crc kubenswrapper[4947]: I1129 07:44:27.184272 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b864faf-129c-4728-8d8c-5ebda5285ca4-utilities\") pod \"community-operators-7kz82\" (UID: \"9b864faf-129c-4728-8d8c-5ebda5285ca4\") " pod="openshift-marketplace/community-operators-7kz82" Nov 29 07:44:27 crc kubenswrapper[4947]: I1129 07:44:27.184303 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b864faf-129c-4728-8d8c-5ebda5285ca4-catalog-content\") pod \"community-operators-7kz82\" (UID: \"9b864faf-129c-4728-8d8c-5ebda5285ca4\") " pod="openshift-marketplace/community-operators-7kz82" Nov 29 07:44:27 crc kubenswrapper[4947]: I1129 07:44:27.211624 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtfwt\" (UniqueName: \"kubernetes.io/projected/9b864faf-129c-4728-8d8c-5ebda5285ca4-kube-api-access-vtfwt\") pod \"community-operators-7kz82\" (UID: \"9b864faf-129c-4728-8d8c-5ebda5285ca4\") " pod="openshift-marketplace/community-operators-7kz82" Nov 29 07:44:27 crc kubenswrapper[4947]: I1129 07:44:27.235301 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7kz82" Nov 29 07:44:27 crc kubenswrapper[4947]: I1129 07:44:27.844681 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7kz82"] Nov 29 07:44:27 crc kubenswrapper[4947]: W1129 07:44:27.853348 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b864faf_129c_4728_8d8c_5ebda5285ca4.slice/crio-3671b2b8e3e78ca339a62d1738bb00bee9ca6a6486139ccbc594e8476c6dcd28 WatchSource:0}: Error finding container 3671b2b8e3e78ca339a62d1738bb00bee9ca6a6486139ccbc594e8476c6dcd28: Status 404 returned error can't find the container with id 3671b2b8e3e78ca339a62d1738bb00bee9ca6a6486139ccbc594e8476c6dcd28 Nov 29 07:44:28 crc kubenswrapper[4947]: I1129 07:44:28.037423 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kz82" event={"ID":"9b864faf-129c-4728-8d8c-5ebda5285ca4","Type":"ContainerStarted","Data":"3671b2b8e3e78ca339a62d1738bb00bee9ca6a6486139ccbc594e8476c6dcd28"} Nov 29 07:44:29 crc kubenswrapper[4947]: I1129 07:44:29.049057 4947 generic.go:334] "Generic (PLEG): container finished" podID="9b864faf-129c-4728-8d8c-5ebda5285ca4" containerID="699ac695439cd5b2090916e4a747f9893bc87c0f5051cfc14607dd8bea85ab8d" exitCode=0 Nov 29 07:44:29 crc kubenswrapper[4947]: I1129 07:44:29.049113 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kz82" event={"ID":"9b864faf-129c-4728-8d8c-5ebda5285ca4","Type":"ContainerDied","Data":"699ac695439cd5b2090916e4a747f9893bc87c0f5051cfc14607dd8bea85ab8d"} Nov 29 07:44:29 crc kubenswrapper[4947]: I1129 07:44:29.051528 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:44:30 crc kubenswrapper[4947]: I1129 07:44:30.067821 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-7kz82" event={"ID":"9b864faf-129c-4728-8d8c-5ebda5285ca4","Type":"ContainerStarted","Data":"79642506a578971ac9656ec08d95d3ef248471fd9e24140c41aab645e67b34f3"} Nov 29 07:44:31 crc kubenswrapper[4947]: I1129 07:44:31.077399 4947 generic.go:334] "Generic (PLEG): container finished" podID="9b864faf-129c-4728-8d8c-5ebda5285ca4" containerID="79642506a578971ac9656ec08d95d3ef248471fd9e24140c41aab645e67b34f3" exitCode=0 Nov 29 07:44:31 crc kubenswrapper[4947]: I1129 07:44:31.077487 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kz82" event={"ID":"9b864faf-129c-4728-8d8c-5ebda5285ca4","Type":"ContainerDied","Data":"79642506a578971ac9656ec08d95d3ef248471fd9e24140c41aab645e67b34f3"} Nov 29 07:44:32 crc kubenswrapper[4947]: I1129 07:44:32.091018 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kz82" event={"ID":"9b864faf-129c-4728-8d8c-5ebda5285ca4","Type":"ContainerStarted","Data":"ab0f2e81171361057cf2173c421da63ae36746b82e31adddf42c74a75c383218"} Nov 29 07:44:32 crc kubenswrapper[4947]: I1129 07:44:32.119121 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7kz82" podStartSLOduration=3.707197019 podStartE2EDuration="6.119095621s" podCreationTimestamp="2025-11-29 07:44:26 +0000 UTC" firstStartedPulling="2025-11-29 07:44:29.051279452 +0000 UTC m=+4220.095661533" lastFinishedPulling="2025-11-29 07:44:31.463178054 +0000 UTC m=+4222.507560135" observedRunningTime="2025-11-29 07:44:32.108035191 +0000 UTC m=+4223.152417272" watchObservedRunningTime="2025-11-29 07:44:32.119095621 +0000 UTC m=+4223.163477702" Nov 29 07:44:37 crc kubenswrapper[4947]: I1129 07:44:37.236365 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7kz82" Nov 29 07:44:37 crc kubenswrapper[4947]: I1129 07:44:37.236808 
4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7kz82" Nov 29 07:44:37 crc kubenswrapper[4947]: I1129 07:44:37.289362 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7kz82" Nov 29 07:44:38 crc kubenswrapper[4947]: I1129 07:44:38.197271 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7kz82" Nov 29 07:44:38 crc kubenswrapper[4947]: I1129 07:44:38.253938 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7kz82"] Nov 29 07:44:40 crc kubenswrapper[4947]: I1129 07:44:40.164414 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7kz82" podUID="9b864faf-129c-4728-8d8c-5ebda5285ca4" containerName="registry-server" containerID="cri-o://ab0f2e81171361057cf2173c421da63ae36746b82e31adddf42c74a75c383218" gracePeriod=2 Nov 29 07:44:40 crc kubenswrapper[4947]: I1129 07:44:40.780461 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7kz82" Nov 29 07:44:40 crc kubenswrapper[4947]: I1129 07:44:40.877277 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b864faf-129c-4728-8d8c-5ebda5285ca4-catalog-content\") pod \"9b864faf-129c-4728-8d8c-5ebda5285ca4\" (UID: \"9b864faf-129c-4728-8d8c-5ebda5285ca4\") " Nov 29 07:44:40 crc kubenswrapper[4947]: I1129 07:44:40.877420 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtfwt\" (UniqueName: \"kubernetes.io/projected/9b864faf-129c-4728-8d8c-5ebda5285ca4-kube-api-access-vtfwt\") pod \"9b864faf-129c-4728-8d8c-5ebda5285ca4\" (UID: \"9b864faf-129c-4728-8d8c-5ebda5285ca4\") " Nov 29 07:44:40 crc kubenswrapper[4947]: I1129 07:44:40.877499 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b864faf-129c-4728-8d8c-5ebda5285ca4-utilities\") pod \"9b864faf-129c-4728-8d8c-5ebda5285ca4\" (UID: \"9b864faf-129c-4728-8d8c-5ebda5285ca4\") " Nov 29 07:44:40 crc kubenswrapper[4947]: I1129 07:44:40.878799 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b864faf-129c-4728-8d8c-5ebda5285ca4-utilities" (OuterVolumeSpecName: "utilities") pod "9b864faf-129c-4728-8d8c-5ebda5285ca4" (UID: "9b864faf-129c-4728-8d8c-5ebda5285ca4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:44:40 crc kubenswrapper[4947]: I1129 07:44:40.887152 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b864faf-129c-4728-8d8c-5ebda5285ca4-kube-api-access-vtfwt" (OuterVolumeSpecName: "kube-api-access-vtfwt") pod "9b864faf-129c-4728-8d8c-5ebda5285ca4" (UID: "9b864faf-129c-4728-8d8c-5ebda5285ca4"). InnerVolumeSpecName "kube-api-access-vtfwt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:44:40 crc kubenswrapper[4947]: I1129 07:44:40.948951 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b864faf-129c-4728-8d8c-5ebda5285ca4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b864faf-129c-4728-8d8c-5ebda5285ca4" (UID: "9b864faf-129c-4728-8d8c-5ebda5285ca4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:44:40 crc kubenswrapper[4947]: I1129 07:44:40.980469 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b864faf-129c-4728-8d8c-5ebda5285ca4-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:40 crc kubenswrapper[4947]: I1129 07:44:40.980504 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b864faf-129c-4728-8d8c-5ebda5285ca4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:40 crc kubenswrapper[4947]: I1129 07:44:40.980514 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtfwt\" (UniqueName: \"kubernetes.io/projected/9b864faf-129c-4728-8d8c-5ebda5285ca4-kube-api-access-vtfwt\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:41 crc kubenswrapper[4947]: I1129 07:44:41.184055 4947 generic.go:334] "Generic (PLEG): container finished" podID="9b864faf-129c-4728-8d8c-5ebda5285ca4" containerID="ab0f2e81171361057cf2173c421da63ae36746b82e31adddf42c74a75c383218" exitCode=0 Nov 29 07:44:41 crc kubenswrapper[4947]: I1129 07:44:41.184164 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7kz82" Nov 29 07:44:41 crc kubenswrapper[4947]: I1129 07:44:41.191283 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kz82" event={"ID":"9b864faf-129c-4728-8d8c-5ebda5285ca4","Type":"ContainerDied","Data":"ab0f2e81171361057cf2173c421da63ae36746b82e31adddf42c74a75c383218"} Nov 29 07:44:41 crc kubenswrapper[4947]: I1129 07:44:41.191352 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kz82" event={"ID":"9b864faf-129c-4728-8d8c-5ebda5285ca4","Type":"ContainerDied","Data":"3671b2b8e3e78ca339a62d1738bb00bee9ca6a6486139ccbc594e8476c6dcd28"} Nov 29 07:44:41 crc kubenswrapper[4947]: I1129 07:44:41.191372 4947 scope.go:117] "RemoveContainer" containerID="ab0f2e81171361057cf2173c421da63ae36746b82e31adddf42c74a75c383218" Nov 29 07:44:41 crc kubenswrapper[4947]: E1129 07:44:41.197804 4947 kuberuntime_gc.go:389] "Failed to remove container log dead symlink" err="remove /var/log/containers/community-operators-7kz82_openshift-marketplace_registry-server-ab0f2e81171361057cf2173c421da63ae36746b82e31adddf42c74a75c383218.log: no such file or directory" path="/var/log/containers/community-operators-7kz82_openshift-marketplace_registry-server-ab0f2e81171361057cf2173c421da63ae36746b82e31adddf42c74a75c383218.log" Nov 29 07:44:41 crc kubenswrapper[4947]: I1129 07:44:41.215914 4947 scope.go:117] "RemoveContainer" containerID="79642506a578971ac9656ec08d95d3ef248471fd9e24140c41aab645e67b34f3" Nov 29 07:44:41 crc kubenswrapper[4947]: I1129 07:44:41.231392 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7kz82"] Nov 29 07:44:41 crc kubenswrapper[4947]: I1129 07:44:41.236955 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7kz82"] Nov 29 07:44:41 crc kubenswrapper[4947]: I1129 07:44:41.255940 4947 scope.go:117] 
"RemoveContainer" containerID="699ac695439cd5b2090916e4a747f9893bc87c0f5051cfc14607dd8bea85ab8d" Nov 29 07:44:41 crc kubenswrapper[4947]: I1129 07:44:41.287669 4947 scope.go:117] "RemoveContainer" containerID="ab0f2e81171361057cf2173c421da63ae36746b82e31adddf42c74a75c383218" Nov 29 07:44:41 crc kubenswrapper[4947]: E1129 07:44:41.288750 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab0f2e81171361057cf2173c421da63ae36746b82e31adddf42c74a75c383218\": container with ID starting with ab0f2e81171361057cf2173c421da63ae36746b82e31adddf42c74a75c383218 not found: ID does not exist" containerID="ab0f2e81171361057cf2173c421da63ae36746b82e31adddf42c74a75c383218" Nov 29 07:44:41 crc kubenswrapper[4947]: I1129 07:44:41.288801 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab0f2e81171361057cf2173c421da63ae36746b82e31adddf42c74a75c383218"} err="failed to get container status \"ab0f2e81171361057cf2173c421da63ae36746b82e31adddf42c74a75c383218\": rpc error: code = NotFound desc = could not find container \"ab0f2e81171361057cf2173c421da63ae36746b82e31adddf42c74a75c383218\": container with ID starting with ab0f2e81171361057cf2173c421da63ae36746b82e31adddf42c74a75c383218 not found: ID does not exist" Nov 29 07:44:41 crc kubenswrapper[4947]: I1129 07:44:41.288831 4947 scope.go:117] "RemoveContainer" containerID="79642506a578971ac9656ec08d95d3ef248471fd9e24140c41aab645e67b34f3" Nov 29 07:44:41 crc kubenswrapper[4947]: E1129 07:44:41.289385 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79642506a578971ac9656ec08d95d3ef248471fd9e24140c41aab645e67b34f3\": container with ID starting with 79642506a578971ac9656ec08d95d3ef248471fd9e24140c41aab645e67b34f3 not found: ID does not exist" containerID="79642506a578971ac9656ec08d95d3ef248471fd9e24140c41aab645e67b34f3" Nov 29 07:44:41 crc 
kubenswrapper[4947]: I1129 07:44:41.289419 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79642506a578971ac9656ec08d95d3ef248471fd9e24140c41aab645e67b34f3"} err="failed to get container status \"79642506a578971ac9656ec08d95d3ef248471fd9e24140c41aab645e67b34f3\": rpc error: code = NotFound desc = could not find container \"79642506a578971ac9656ec08d95d3ef248471fd9e24140c41aab645e67b34f3\": container with ID starting with 79642506a578971ac9656ec08d95d3ef248471fd9e24140c41aab645e67b34f3 not found: ID does not exist" Nov 29 07:44:41 crc kubenswrapper[4947]: I1129 07:44:41.289443 4947 scope.go:117] "RemoveContainer" containerID="699ac695439cd5b2090916e4a747f9893bc87c0f5051cfc14607dd8bea85ab8d" Nov 29 07:44:41 crc kubenswrapper[4947]: E1129 07:44:41.289793 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"699ac695439cd5b2090916e4a747f9893bc87c0f5051cfc14607dd8bea85ab8d\": container with ID starting with 699ac695439cd5b2090916e4a747f9893bc87c0f5051cfc14607dd8bea85ab8d not found: ID does not exist" containerID="699ac695439cd5b2090916e4a747f9893bc87c0f5051cfc14607dd8bea85ab8d" Nov 29 07:44:41 crc kubenswrapper[4947]: I1129 07:44:41.289850 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"699ac695439cd5b2090916e4a747f9893bc87c0f5051cfc14607dd8bea85ab8d"} err="failed to get container status \"699ac695439cd5b2090916e4a747f9893bc87c0f5051cfc14607dd8bea85ab8d\": rpc error: code = NotFound desc = could not find container \"699ac695439cd5b2090916e4a747f9893bc87c0f5051cfc14607dd8bea85ab8d\": container with ID starting with 699ac695439cd5b2090916e4a747f9893bc87c0f5051cfc14607dd8bea85ab8d not found: ID does not exist" Nov 29 07:44:43 crc kubenswrapper[4947]: I1129 07:44:43.189803 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b864faf-129c-4728-8d8c-5ebda5285ca4" 
path="/var/lib/kubelet/pods/9b864faf-129c-4728-8d8c-5ebda5285ca4/volumes" Nov 29 07:45:00 crc kubenswrapper[4947]: I1129 07:45:00.171538 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406705-r8wwx"] Nov 29 07:45:00 crc kubenswrapper[4947]: E1129 07:45:00.172735 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b864faf-129c-4728-8d8c-5ebda5285ca4" containerName="registry-server" Nov 29 07:45:00 crc kubenswrapper[4947]: I1129 07:45:00.172754 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b864faf-129c-4728-8d8c-5ebda5285ca4" containerName="registry-server" Nov 29 07:45:00 crc kubenswrapper[4947]: E1129 07:45:00.172771 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b864faf-129c-4728-8d8c-5ebda5285ca4" containerName="extract-utilities" Nov 29 07:45:00 crc kubenswrapper[4947]: I1129 07:45:00.172779 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b864faf-129c-4728-8d8c-5ebda5285ca4" containerName="extract-utilities" Nov 29 07:45:00 crc kubenswrapper[4947]: E1129 07:45:00.172811 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b864faf-129c-4728-8d8c-5ebda5285ca4" containerName="extract-content" Nov 29 07:45:00 crc kubenswrapper[4947]: I1129 07:45:00.172820 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b864faf-129c-4728-8d8c-5ebda5285ca4" containerName="extract-content" Nov 29 07:45:00 crc kubenswrapper[4947]: I1129 07:45:00.173094 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b864faf-129c-4728-8d8c-5ebda5285ca4" containerName="registry-server" Nov 29 07:45:00 crc kubenswrapper[4947]: I1129 07:45:00.174051 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-r8wwx" Nov 29 07:45:00 crc kubenswrapper[4947]: I1129 07:45:00.176821 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 07:45:00 crc kubenswrapper[4947]: I1129 07:45:00.177836 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 07:45:00 crc kubenswrapper[4947]: I1129 07:45:00.215659 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406705-r8wwx"] Nov 29 07:45:00 crc kubenswrapper[4947]: I1129 07:45:00.332237 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ae4d6c7-41d2-448f-80fa-328befae8ea6-secret-volume\") pod \"collect-profiles-29406705-r8wwx\" (UID: \"4ae4d6c7-41d2-448f-80fa-328befae8ea6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-r8wwx" Nov 29 07:45:00 crc kubenswrapper[4947]: I1129 07:45:00.332293 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs7nt\" (UniqueName: \"kubernetes.io/projected/4ae4d6c7-41d2-448f-80fa-328befae8ea6-kube-api-access-rs7nt\") pod \"collect-profiles-29406705-r8wwx\" (UID: \"4ae4d6c7-41d2-448f-80fa-328befae8ea6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-r8wwx" Nov 29 07:45:00 crc kubenswrapper[4947]: I1129 07:45:00.332451 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ae4d6c7-41d2-448f-80fa-328befae8ea6-config-volume\") pod \"collect-profiles-29406705-r8wwx\" (UID: \"4ae4d6c7-41d2-448f-80fa-328befae8ea6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-r8wwx" Nov 29 07:45:00 crc kubenswrapper[4947]: I1129 07:45:00.434520 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ae4d6c7-41d2-448f-80fa-328befae8ea6-secret-volume\") pod \"collect-profiles-29406705-r8wwx\" (UID: \"4ae4d6c7-41d2-448f-80fa-328befae8ea6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-r8wwx" Nov 29 07:45:00 crc kubenswrapper[4947]: I1129 07:45:00.434581 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs7nt\" (UniqueName: \"kubernetes.io/projected/4ae4d6c7-41d2-448f-80fa-328befae8ea6-kube-api-access-rs7nt\") pod \"collect-profiles-29406705-r8wwx\" (UID: \"4ae4d6c7-41d2-448f-80fa-328befae8ea6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-r8wwx" Nov 29 07:45:00 crc kubenswrapper[4947]: I1129 07:45:00.434646 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ae4d6c7-41d2-448f-80fa-328befae8ea6-config-volume\") pod \"collect-profiles-29406705-r8wwx\" (UID: \"4ae4d6c7-41d2-448f-80fa-328befae8ea6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-r8wwx" Nov 29 07:45:00 crc kubenswrapper[4947]: I1129 07:45:00.435723 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ae4d6c7-41d2-448f-80fa-328befae8ea6-config-volume\") pod \"collect-profiles-29406705-r8wwx\" (UID: \"4ae4d6c7-41d2-448f-80fa-328befae8ea6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-r8wwx" Nov 29 07:45:00 crc kubenswrapper[4947]: I1129 07:45:00.449766 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4ae4d6c7-41d2-448f-80fa-328befae8ea6-secret-volume\") pod \"collect-profiles-29406705-r8wwx\" (UID: \"4ae4d6c7-41d2-448f-80fa-328befae8ea6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-r8wwx" Nov 29 07:45:00 crc kubenswrapper[4947]: I1129 07:45:00.460172 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs7nt\" (UniqueName: \"kubernetes.io/projected/4ae4d6c7-41d2-448f-80fa-328befae8ea6-kube-api-access-rs7nt\") pod \"collect-profiles-29406705-r8wwx\" (UID: \"4ae4d6c7-41d2-448f-80fa-328befae8ea6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-r8wwx" Nov 29 07:45:00 crc kubenswrapper[4947]: I1129 07:45:00.509761 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-r8wwx" Nov 29 07:45:00 crc kubenswrapper[4947]: I1129 07:45:00.985434 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406705-r8wwx"] Nov 29 07:45:01 crc kubenswrapper[4947]: I1129 07:45:01.786175 4947 generic.go:334] "Generic (PLEG): container finished" podID="4ae4d6c7-41d2-448f-80fa-328befae8ea6" containerID="db7ab53c6ce290ec9e06021fc79dd0f6aaa01161b2772dbbcf5d4aba7299f59b" exitCode=0 Nov 29 07:45:01 crc kubenswrapper[4947]: I1129 07:45:01.786298 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-r8wwx" event={"ID":"4ae4d6c7-41d2-448f-80fa-328befae8ea6","Type":"ContainerDied","Data":"db7ab53c6ce290ec9e06021fc79dd0f6aaa01161b2772dbbcf5d4aba7299f59b"} Nov 29 07:45:01 crc kubenswrapper[4947]: I1129 07:45:01.786686 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-r8wwx" 
event={"ID":"4ae4d6c7-41d2-448f-80fa-328befae8ea6","Type":"ContainerStarted","Data":"0ec4e8ca29ea67794e944a34f8488a52ebc844ccf524e2d84b8eef8ce464572b"} Nov 29 07:45:03 crc kubenswrapper[4947]: I1129 07:45:03.146793 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-r8wwx" Nov 29 07:45:03 crc kubenswrapper[4947]: I1129 07:45:03.299138 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ae4d6c7-41d2-448f-80fa-328befae8ea6-secret-volume\") pod \"4ae4d6c7-41d2-448f-80fa-328befae8ea6\" (UID: \"4ae4d6c7-41d2-448f-80fa-328befae8ea6\") " Nov 29 07:45:03 crc kubenswrapper[4947]: I1129 07:45:03.300571 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ae4d6c7-41d2-448f-80fa-328befae8ea6-config-volume\") pod \"4ae4d6c7-41d2-448f-80fa-328befae8ea6\" (UID: \"4ae4d6c7-41d2-448f-80fa-328befae8ea6\") " Nov 29 07:45:03 crc kubenswrapper[4947]: I1129 07:45:03.300706 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs7nt\" (UniqueName: \"kubernetes.io/projected/4ae4d6c7-41d2-448f-80fa-328befae8ea6-kube-api-access-rs7nt\") pod \"4ae4d6c7-41d2-448f-80fa-328befae8ea6\" (UID: \"4ae4d6c7-41d2-448f-80fa-328befae8ea6\") " Nov 29 07:45:03 crc kubenswrapper[4947]: I1129 07:45:03.301202 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ae4d6c7-41d2-448f-80fa-328befae8ea6-config-volume" (OuterVolumeSpecName: "config-volume") pod "4ae4d6c7-41d2-448f-80fa-328befae8ea6" (UID: "4ae4d6c7-41d2-448f-80fa-328befae8ea6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:45:03 crc kubenswrapper[4947]: I1129 07:45:03.301379 4947 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ae4d6c7-41d2-448f-80fa-328befae8ea6-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:03 crc kubenswrapper[4947]: I1129 07:45:03.308813 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ae4d6c7-41d2-448f-80fa-328befae8ea6-kube-api-access-rs7nt" (OuterVolumeSpecName: "kube-api-access-rs7nt") pod "4ae4d6c7-41d2-448f-80fa-328befae8ea6" (UID: "4ae4d6c7-41d2-448f-80fa-328befae8ea6"). InnerVolumeSpecName "kube-api-access-rs7nt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:45:03 crc kubenswrapper[4947]: I1129 07:45:03.309351 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae4d6c7-41d2-448f-80fa-328befae8ea6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4ae4d6c7-41d2-448f-80fa-328befae8ea6" (UID: "4ae4d6c7-41d2-448f-80fa-328befae8ea6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:45:03 crc kubenswrapper[4947]: I1129 07:45:03.403954 4947 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ae4d6c7-41d2-448f-80fa-328befae8ea6-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:03 crc kubenswrapper[4947]: I1129 07:45:03.404015 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs7nt\" (UniqueName: \"kubernetes.io/projected/4ae4d6c7-41d2-448f-80fa-328befae8ea6-kube-api-access-rs7nt\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:03 crc kubenswrapper[4947]: I1129 07:45:03.806287 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-r8wwx" event={"ID":"4ae4d6c7-41d2-448f-80fa-328befae8ea6","Type":"ContainerDied","Data":"0ec4e8ca29ea67794e944a34f8488a52ebc844ccf524e2d84b8eef8ce464572b"} Nov 29 07:45:03 crc kubenswrapper[4947]: I1129 07:45:03.806349 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ec4e8ca29ea67794e944a34f8488a52ebc844ccf524e2d84b8eef8ce464572b" Nov 29 07:45:03 crc kubenswrapper[4947]: I1129 07:45:03.806326 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-r8wwx" Nov 29 07:45:04 crc kubenswrapper[4947]: I1129 07:45:04.289851 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406660-v5gfx"] Nov 29 07:45:04 crc kubenswrapper[4947]: I1129 07:45:04.298528 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406660-v5gfx"] Nov 29 07:45:05 crc kubenswrapper[4947]: I1129 07:45:05.192197 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b7812d7-d76d-4055-8c4c-c59f058af07f" path="/var/lib/kubelet/pods/7b7812d7-d76d-4055-8c4c-c59f058af07f/volumes" Nov 29 07:45:41 crc kubenswrapper[4947]: I1129 07:45:41.205580 4947 scope.go:117] "RemoveContainer" containerID="d3ea36a03170ca5c542cde276b1e0a404cef6cbc6e1bb7e8a93dbba887929c75" Nov 29 07:45:52 crc kubenswrapper[4947]: I1129 07:45:52.987251 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:45:52 crc kubenswrapper[4947]: I1129 07:45:52.988159 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:46:22 crc kubenswrapper[4947]: I1129 07:46:22.988334 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Nov 29 07:46:22 crc kubenswrapper[4947]: I1129 07:46:22.988949 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:46:52 crc kubenswrapper[4947]: I1129 07:46:52.988270 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:46:52 crc kubenswrapper[4947]: I1129 07:46:52.989109 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:46:52 crc kubenswrapper[4947]: I1129 07:46:52.989174 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 07:46:52 crc kubenswrapper[4947]: I1129 07:46:52.990243 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"386fed2bf9e7112ee72ce8a6463b65c3b8de94f73faee1eae597cec2ca1d03cc"} pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:46:52 crc kubenswrapper[4947]: I1129 07:46:52.990320 4947 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" containerID="cri-o://386fed2bf9e7112ee72ce8a6463b65c3b8de94f73faee1eae597cec2ca1d03cc" gracePeriod=600 Nov 29 07:46:53 crc kubenswrapper[4947]: I1129 07:46:53.965384 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerID="386fed2bf9e7112ee72ce8a6463b65c3b8de94f73faee1eae597cec2ca1d03cc" exitCode=0 Nov 29 07:46:53 crc kubenswrapper[4947]: I1129 07:46:53.965492 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerDied","Data":"386fed2bf9e7112ee72ce8a6463b65c3b8de94f73faee1eae597cec2ca1d03cc"} Nov 29 07:46:53 crc kubenswrapper[4947]: I1129 07:46:53.966339 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475"} Nov 29 07:46:53 crc kubenswrapper[4947]: I1129 07:46:53.966407 4947 scope.go:117] "RemoveContainer" containerID="befe8bc1f518b72b2765c4bbae633eaff2671198765b803461fe977b3f76f166" Nov 29 07:46:58 crc kubenswrapper[4947]: I1129 07:46:58.306716 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gkrpv"] Nov 29 07:46:58 crc kubenswrapper[4947]: E1129 07:46:58.312813 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae4d6c7-41d2-448f-80fa-328befae8ea6" containerName="collect-profiles" Nov 29 07:46:58 crc kubenswrapper[4947]: I1129 07:46:58.312926 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae4d6c7-41d2-448f-80fa-328befae8ea6" containerName="collect-profiles" Nov 29 07:46:58 crc kubenswrapper[4947]: I1129 07:46:58.313356 4947 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4ae4d6c7-41d2-448f-80fa-328befae8ea6" containerName="collect-profiles" Nov 29 07:46:58 crc kubenswrapper[4947]: I1129 07:46:58.314944 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gkrpv" Nov 29 07:46:58 crc kubenswrapper[4947]: I1129 07:46:58.353162 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gkrpv"] Nov 29 07:46:58 crc kubenswrapper[4947]: I1129 07:46:58.486265 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwv6h\" (UniqueName: \"kubernetes.io/projected/bd2b8464-1fa6-4cd9-8380-d6634168053a-kube-api-access-mwv6h\") pod \"redhat-marketplace-gkrpv\" (UID: \"bd2b8464-1fa6-4cd9-8380-d6634168053a\") " pod="openshift-marketplace/redhat-marketplace-gkrpv" Nov 29 07:46:58 crc kubenswrapper[4947]: I1129 07:46:58.486355 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd2b8464-1fa6-4cd9-8380-d6634168053a-utilities\") pod \"redhat-marketplace-gkrpv\" (UID: \"bd2b8464-1fa6-4cd9-8380-d6634168053a\") " pod="openshift-marketplace/redhat-marketplace-gkrpv" Nov 29 07:46:58 crc kubenswrapper[4947]: I1129 07:46:58.486478 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd2b8464-1fa6-4cd9-8380-d6634168053a-catalog-content\") pod \"redhat-marketplace-gkrpv\" (UID: \"bd2b8464-1fa6-4cd9-8380-d6634168053a\") " pod="openshift-marketplace/redhat-marketplace-gkrpv" Nov 29 07:46:58 crc kubenswrapper[4947]: I1129 07:46:58.588993 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bd2b8464-1fa6-4cd9-8380-d6634168053a-catalog-content\") pod \"redhat-marketplace-gkrpv\" (UID: \"bd2b8464-1fa6-4cd9-8380-d6634168053a\") " pod="openshift-marketplace/redhat-marketplace-gkrpv" Nov 29 07:46:58 crc kubenswrapper[4947]: I1129 07:46:58.589132 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwv6h\" (UniqueName: \"kubernetes.io/projected/bd2b8464-1fa6-4cd9-8380-d6634168053a-kube-api-access-mwv6h\") pod \"redhat-marketplace-gkrpv\" (UID: \"bd2b8464-1fa6-4cd9-8380-d6634168053a\") " pod="openshift-marketplace/redhat-marketplace-gkrpv" Nov 29 07:46:58 crc kubenswrapper[4947]: I1129 07:46:58.589196 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd2b8464-1fa6-4cd9-8380-d6634168053a-utilities\") pod \"redhat-marketplace-gkrpv\" (UID: \"bd2b8464-1fa6-4cd9-8380-d6634168053a\") " pod="openshift-marketplace/redhat-marketplace-gkrpv" Nov 29 07:46:58 crc kubenswrapper[4947]: I1129 07:46:58.589895 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd2b8464-1fa6-4cd9-8380-d6634168053a-utilities\") pod \"redhat-marketplace-gkrpv\" (UID: \"bd2b8464-1fa6-4cd9-8380-d6634168053a\") " pod="openshift-marketplace/redhat-marketplace-gkrpv" Nov 29 07:46:58 crc kubenswrapper[4947]: I1129 07:46:58.590162 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd2b8464-1fa6-4cd9-8380-d6634168053a-catalog-content\") pod \"redhat-marketplace-gkrpv\" (UID: \"bd2b8464-1fa6-4cd9-8380-d6634168053a\") " pod="openshift-marketplace/redhat-marketplace-gkrpv" Nov 29 07:46:58 crc kubenswrapper[4947]: I1129 07:46:58.624029 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwv6h\" (UniqueName: 
\"kubernetes.io/projected/bd2b8464-1fa6-4cd9-8380-d6634168053a-kube-api-access-mwv6h\") pod \"redhat-marketplace-gkrpv\" (UID: \"bd2b8464-1fa6-4cd9-8380-d6634168053a\") " pod="openshift-marketplace/redhat-marketplace-gkrpv" Nov 29 07:46:58 crc kubenswrapper[4947]: I1129 07:46:58.654208 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gkrpv" Nov 29 07:46:59 crc kubenswrapper[4947]: I1129 07:46:59.167635 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gkrpv"] Nov 29 07:47:00 crc kubenswrapper[4947]: I1129 07:47:00.029906 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gkrpv" event={"ID":"bd2b8464-1fa6-4cd9-8380-d6634168053a","Type":"ContainerStarted","Data":"736db2f68ee06579d777d56473d8f04976d5a64bdf312eff5467abc348cc0517"} Nov 29 07:47:00 crc kubenswrapper[4947]: I1129 07:47:00.030343 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gkrpv" event={"ID":"bd2b8464-1fa6-4cd9-8380-d6634168053a","Type":"ContainerStarted","Data":"f68be78a60c4ed7ed462687aa73ff7eaa413266152e4469a9515124e74ff761f"} Nov 29 07:47:01 crc kubenswrapper[4947]: I1129 07:47:01.045502 4947 generic.go:334] "Generic (PLEG): container finished" podID="bd2b8464-1fa6-4cd9-8380-d6634168053a" containerID="736db2f68ee06579d777d56473d8f04976d5a64bdf312eff5467abc348cc0517" exitCode=0 Nov 29 07:47:01 crc kubenswrapper[4947]: I1129 07:47:01.045605 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gkrpv" event={"ID":"bd2b8464-1fa6-4cd9-8380-d6634168053a","Type":"ContainerDied","Data":"736db2f68ee06579d777d56473d8f04976d5a64bdf312eff5467abc348cc0517"} Nov 29 07:47:03 crc kubenswrapper[4947]: I1129 07:47:03.073425 4947 generic.go:334] "Generic (PLEG): container finished" podID="bd2b8464-1fa6-4cd9-8380-d6634168053a" 
containerID="c95e70509a5dbd02a24a24610625e6ff862229f24e1ed79bef0d131673165adc" exitCode=0 Nov 29 07:47:03 crc kubenswrapper[4947]: I1129 07:47:03.073717 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gkrpv" event={"ID":"bd2b8464-1fa6-4cd9-8380-d6634168053a","Type":"ContainerDied","Data":"c95e70509a5dbd02a24a24610625e6ff862229f24e1ed79bef0d131673165adc"} Nov 29 07:47:04 crc kubenswrapper[4947]: I1129 07:47:04.089297 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gkrpv" event={"ID":"bd2b8464-1fa6-4cd9-8380-d6634168053a","Type":"ContainerStarted","Data":"71b1be86af348459907c401fcfeb9d4657b3fe351b190ad437043a0e58c1745d"} Nov 29 07:47:08 crc kubenswrapper[4947]: I1129 07:47:08.655625 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gkrpv" Nov 29 07:47:08 crc kubenswrapper[4947]: I1129 07:47:08.656847 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gkrpv" Nov 29 07:47:08 crc kubenswrapper[4947]: I1129 07:47:08.715274 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gkrpv" Nov 29 07:47:08 crc kubenswrapper[4947]: I1129 07:47:08.751617 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gkrpv" podStartSLOduration=8.0750901 podStartE2EDuration="10.751575281s" podCreationTimestamp="2025-11-29 07:46:58 +0000 UTC" firstStartedPulling="2025-11-29 07:47:01.050347747 +0000 UTC m=+4372.094729828" lastFinishedPulling="2025-11-29 07:47:03.726832928 +0000 UTC m=+4374.771215009" observedRunningTime="2025-11-29 07:47:04.142264156 +0000 UTC m=+4375.186646237" watchObservedRunningTime="2025-11-29 07:47:08.751575281 +0000 UTC m=+4379.795957362" Nov 29 07:47:09 crc kubenswrapper[4947]: I1129 07:47:09.202598 
4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gkrpv" Nov 29 07:47:09 crc kubenswrapper[4947]: I1129 07:47:09.272014 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gkrpv"] Nov 29 07:47:11 crc kubenswrapper[4947]: I1129 07:47:11.162285 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gkrpv" podUID="bd2b8464-1fa6-4cd9-8380-d6634168053a" containerName="registry-server" containerID="cri-o://71b1be86af348459907c401fcfeb9d4657b3fe351b190ad437043a0e58c1745d" gracePeriod=2 Nov 29 07:47:11 crc kubenswrapper[4947]: I1129 07:47:11.788015 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gkrpv" Nov 29 07:47:11 crc kubenswrapper[4947]: I1129 07:47:11.960251 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwv6h\" (UniqueName: \"kubernetes.io/projected/bd2b8464-1fa6-4cd9-8380-d6634168053a-kube-api-access-mwv6h\") pod \"bd2b8464-1fa6-4cd9-8380-d6634168053a\" (UID: \"bd2b8464-1fa6-4cd9-8380-d6634168053a\") " Nov 29 07:47:11 crc kubenswrapper[4947]: I1129 07:47:11.961074 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd2b8464-1fa6-4cd9-8380-d6634168053a-catalog-content\") pod \"bd2b8464-1fa6-4cd9-8380-d6634168053a\" (UID: \"bd2b8464-1fa6-4cd9-8380-d6634168053a\") " Nov 29 07:47:11 crc kubenswrapper[4947]: I1129 07:47:11.961613 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd2b8464-1fa6-4cd9-8380-d6634168053a-utilities\") pod \"bd2b8464-1fa6-4cd9-8380-d6634168053a\" (UID: \"bd2b8464-1fa6-4cd9-8380-d6634168053a\") " Nov 29 07:47:11 crc kubenswrapper[4947]: I1129 07:47:11.962480 
4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd2b8464-1fa6-4cd9-8380-d6634168053a-utilities" (OuterVolumeSpecName: "utilities") pod "bd2b8464-1fa6-4cd9-8380-d6634168053a" (UID: "bd2b8464-1fa6-4cd9-8380-d6634168053a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:47:11 crc kubenswrapper[4947]: I1129 07:47:11.976838 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd2b8464-1fa6-4cd9-8380-d6634168053a-kube-api-access-mwv6h" (OuterVolumeSpecName: "kube-api-access-mwv6h") pod "bd2b8464-1fa6-4cd9-8380-d6634168053a" (UID: "bd2b8464-1fa6-4cd9-8380-d6634168053a"). InnerVolumeSpecName "kube-api-access-mwv6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:47:11 crc kubenswrapper[4947]: I1129 07:47:11.991388 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd2b8464-1fa6-4cd9-8380-d6634168053a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd2b8464-1fa6-4cd9-8380-d6634168053a" (UID: "bd2b8464-1fa6-4cd9-8380-d6634168053a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:47:12 crc kubenswrapper[4947]: I1129 07:47:12.065021 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd2b8464-1fa6-4cd9-8380-d6634168053a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:47:12 crc kubenswrapper[4947]: I1129 07:47:12.065068 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd2b8464-1fa6-4cd9-8380-d6634168053a-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:47:12 crc kubenswrapper[4947]: I1129 07:47:12.065085 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwv6h\" (UniqueName: \"kubernetes.io/projected/bd2b8464-1fa6-4cd9-8380-d6634168053a-kube-api-access-mwv6h\") on node \"crc\" DevicePath \"\"" Nov 29 07:47:12 crc kubenswrapper[4947]: I1129 07:47:12.179780 4947 generic.go:334] "Generic (PLEG): container finished" podID="bd2b8464-1fa6-4cd9-8380-d6634168053a" containerID="71b1be86af348459907c401fcfeb9d4657b3fe351b190ad437043a0e58c1745d" exitCode=0 Nov 29 07:47:12 crc kubenswrapper[4947]: I1129 07:47:12.179836 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gkrpv" event={"ID":"bd2b8464-1fa6-4cd9-8380-d6634168053a","Type":"ContainerDied","Data":"71b1be86af348459907c401fcfeb9d4657b3fe351b190ad437043a0e58c1745d"} Nov 29 07:47:12 crc kubenswrapper[4947]: I1129 07:47:12.179849 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gkrpv" Nov 29 07:47:12 crc kubenswrapper[4947]: I1129 07:47:12.179904 4947 scope.go:117] "RemoveContainer" containerID="71b1be86af348459907c401fcfeb9d4657b3fe351b190ad437043a0e58c1745d" Nov 29 07:47:12 crc kubenswrapper[4947]: I1129 07:47:12.179885 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gkrpv" event={"ID":"bd2b8464-1fa6-4cd9-8380-d6634168053a","Type":"ContainerDied","Data":"f68be78a60c4ed7ed462687aa73ff7eaa413266152e4469a9515124e74ff761f"} Nov 29 07:47:12 crc kubenswrapper[4947]: I1129 07:47:12.216386 4947 scope.go:117] "RemoveContainer" containerID="c95e70509a5dbd02a24a24610625e6ff862229f24e1ed79bef0d131673165adc" Nov 29 07:47:12 crc kubenswrapper[4947]: I1129 07:47:12.223384 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gkrpv"] Nov 29 07:47:12 crc kubenswrapper[4947]: I1129 07:47:12.234190 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gkrpv"] Nov 29 07:47:12 crc kubenswrapper[4947]: I1129 07:47:12.246674 4947 scope.go:117] "RemoveContainer" containerID="736db2f68ee06579d777d56473d8f04976d5a64bdf312eff5467abc348cc0517" Nov 29 07:47:12 crc kubenswrapper[4947]: I1129 07:47:12.293748 4947 scope.go:117] "RemoveContainer" containerID="71b1be86af348459907c401fcfeb9d4657b3fe351b190ad437043a0e58c1745d" Nov 29 07:47:12 crc kubenswrapper[4947]: E1129 07:47:12.294785 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71b1be86af348459907c401fcfeb9d4657b3fe351b190ad437043a0e58c1745d\": container with ID starting with 71b1be86af348459907c401fcfeb9d4657b3fe351b190ad437043a0e58c1745d not found: ID does not exist" containerID="71b1be86af348459907c401fcfeb9d4657b3fe351b190ad437043a0e58c1745d" Nov 29 07:47:12 crc kubenswrapper[4947]: I1129 07:47:12.294934 4947 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71b1be86af348459907c401fcfeb9d4657b3fe351b190ad437043a0e58c1745d"} err="failed to get container status \"71b1be86af348459907c401fcfeb9d4657b3fe351b190ad437043a0e58c1745d\": rpc error: code = NotFound desc = could not find container \"71b1be86af348459907c401fcfeb9d4657b3fe351b190ad437043a0e58c1745d\": container with ID starting with 71b1be86af348459907c401fcfeb9d4657b3fe351b190ad437043a0e58c1745d not found: ID does not exist" Nov 29 07:47:12 crc kubenswrapper[4947]: I1129 07:47:12.295051 4947 scope.go:117] "RemoveContainer" containerID="c95e70509a5dbd02a24a24610625e6ff862229f24e1ed79bef0d131673165adc" Nov 29 07:47:12 crc kubenswrapper[4947]: E1129 07:47:12.295830 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c95e70509a5dbd02a24a24610625e6ff862229f24e1ed79bef0d131673165adc\": container with ID starting with c95e70509a5dbd02a24a24610625e6ff862229f24e1ed79bef0d131673165adc not found: ID does not exist" containerID="c95e70509a5dbd02a24a24610625e6ff862229f24e1ed79bef0d131673165adc" Nov 29 07:47:12 crc kubenswrapper[4947]: I1129 07:47:12.295911 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c95e70509a5dbd02a24a24610625e6ff862229f24e1ed79bef0d131673165adc"} err="failed to get container status \"c95e70509a5dbd02a24a24610625e6ff862229f24e1ed79bef0d131673165adc\": rpc error: code = NotFound desc = could not find container \"c95e70509a5dbd02a24a24610625e6ff862229f24e1ed79bef0d131673165adc\": container with ID starting with c95e70509a5dbd02a24a24610625e6ff862229f24e1ed79bef0d131673165adc not found: ID does not exist" Nov 29 07:47:12 crc kubenswrapper[4947]: I1129 07:47:12.295947 4947 scope.go:117] "RemoveContainer" containerID="736db2f68ee06579d777d56473d8f04976d5a64bdf312eff5467abc348cc0517" Nov 29 07:47:12 crc kubenswrapper[4947]: E1129 
07:47:12.296750 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"736db2f68ee06579d777d56473d8f04976d5a64bdf312eff5467abc348cc0517\": container with ID starting with 736db2f68ee06579d777d56473d8f04976d5a64bdf312eff5467abc348cc0517 not found: ID does not exist" containerID="736db2f68ee06579d777d56473d8f04976d5a64bdf312eff5467abc348cc0517" Nov 29 07:47:12 crc kubenswrapper[4947]: I1129 07:47:12.296814 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"736db2f68ee06579d777d56473d8f04976d5a64bdf312eff5467abc348cc0517"} err="failed to get container status \"736db2f68ee06579d777d56473d8f04976d5a64bdf312eff5467abc348cc0517\": rpc error: code = NotFound desc = could not find container \"736db2f68ee06579d777d56473d8f04976d5a64bdf312eff5467abc348cc0517\": container with ID starting with 736db2f68ee06579d777d56473d8f04976d5a64bdf312eff5467abc348cc0517 not found: ID does not exist" Nov 29 07:47:13 crc kubenswrapper[4947]: I1129 07:47:13.203982 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd2b8464-1fa6-4cd9-8380-d6634168053a" path="/var/lib/kubelet/pods/bd2b8464-1fa6-4cd9-8380-d6634168053a/volumes" Nov 29 07:49:05 crc kubenswrapper[4947]: I1129 07:49:05.067419 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-83fe-account-create-update-lc9t4"] Nov 29 07:49:05 crc kubenswrapper[4947]: I1129 07:49:05.080484 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-tjns7"] Nov 29 07:49:05 crc kubenswrapper[4947]: I1129 07:49:05.088195 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-83fe-account-create-update-lc9t4"] Nov 29 07:49:05 crc kubenswrapper[4947]: I1129 07:49:05.097392 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-tjns7"] Nov 29 07:49:05 crc kubenswrapper[4947]: I1129 07:49:05.192975 4947 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15bbf1b5-d20c-4156-a60e-d2720d012aa0" path="/var/lib/kubelet/pods/15bbf1b5-d20c-4156-a60e-d2720d012aa0/volumes" Nov 29 07:49:05 crc kubenswrapper[4947]: I1129 07:49:05.194367 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb628464-6376-4dc1-8142-0b9134560fb3" path="/var/lib/kubelet/pods/eb628464-6376-4dc1-8142-0b9134560fb3/volumes" Nov 29 07:49:22 crc kubenswrapper[4947]: I1129 07:49:22.988184 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:49:22 crc kubenswrapper[4947]: I1129 07:49:22.989582 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:49:41 crc kubenswrapper[4947]: I1129 07:49:41.372418 4947 scope.go:117] "RemoveContainer" containerID="0221df0e523302d68273396173bacc0433bd5dd17dfec96f0eb6fc89ff4b9528" Nov 29 07:49:41 crc kubenswrapper[4947]: I1129 07:49:41.401424 4947 scope.go:117] "RemoveContainer" containerID="4214d3a9e2566d640257ea17f352d9b43d2c3a9228541ccbd3ec2c1c7b838a8f" Nov 29 07:49:41 crc kubenswrapper[4947]: I1129 07:49:41.432727 4947 scope.go:117] "RemoveContainer" containerID="1e06e322291d7ec555d9d7f61ea67ee48d6243753cb29c829415d39f2ca2d7fd" Nov 29 07:49:41 crc kubenswrapper[4947]: I1129 07:49:41.503996 4947 scope.go:117] "RemoveContainer" containerID="aff48e45f9019219314cfaea62afb5b2b82262bb597d222cfbd2c5cb020f6613" Nov 29 07:49:41 crc kubenswrapper[4947]: I1129 07:49:41.555977 4947 scope.go:117] "RemoveContainer" 
containerID="f08a08db469a0514ca910b1df3d7dd7ab4fe4c18f7416f9ec1c748152905fd83" Nov 29 07:49:52 crc kubenswrapper[4947]: I1129 07:49:52.988151 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:49:52 crc kubenswrapper[4947]: I1129 07:49:52.990424 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:49:56 crc kubenswrapper[4947]: I1129 07:49:56.050238 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-9sh8j"] Nov 29 07:49:56 crc kubenswrapper[4947]: I1129 07:49:56.059410 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-9sh8j"] Nov 29 07:49:57 crc kubenswrapper[4947]: I1129 07:49:57.193727 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0759a6d0-0585-4855-8f73-db253214a75b" path="/var/lib/kubelet/pods/0759a6d0-0585-4855-8f73-db253214a75b/volumes" Nov 29 07:50:22 crc kubenswrapper[4947]: I1129 07:50:22.988211 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:50:22 crc kubenswrapper[4947]: I1129 07:50:22.989242 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:50:22 crc kubenswrapper[4947]: I1129 07:50:22.989316 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 07:50:22 crc kubenswrapper[4947]: I1129 07:50:22.990549 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475"} pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:50:22 crc kubenswrapper[4947]: I1129 07:50:22.990615 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" containerID="cri-o://fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" gracePeriod=600 Nov 29 07:50:24 crc kubenswrapper[4947]: I1129 07:50:24.148914 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" exitCode=0 Nov 29 07:50:24 crc kubenswrapper[4947]: I1129 07:50:24.149017 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerDied","Data":"fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475"} Nov 29 07:50:24 crc kubenswrapper[4947]: I1129 07:50:24.149406 4947 scope.go:117] "RemoveContainer" containerID="386fed2bf9e7112ee72ce8a6463b65c3b8de94f73faee1eae597cec2ca1d03cc" Nov 29 07:50:24 crc 
kubenswrapper[4947]: E1129 07:50:24.305475 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:50:25 crc kubenswrapper[4947]: I1129 07:50:25.161318 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:50:25 crc kubenswrapper[4947]: E1129 07:50:25.162251 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:50:39 crc kubenswrapper[4947]: I1129 07:50:39.187107 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:50:39 crc kubenswrapper[4947]: E1129 07:50:39.188567 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:50:41 crc kubenswrapper[4947]: I1129 07:50:41.672896 4947 scope.go:117] "RemoveContainer" containerID="1846f44ffbb038b201977d69380185fb60435d6213183da535c84ddf89ebf1ef" Nov 
29 07:50:51 crc kubenswrapper[4947]: I1129 07:50:51.179310 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:50:51 crc kubenswrapper[4947]: E1129 07:50:51.180111 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:51:03 crc kubenswrapper[4947]: I1129 07:51:03.179457 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:51:03 crc kubenswrapper[4947]: E1129 07:51:03.180808 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:51:16 crc kubenswrapper[4947]: I1129 07:51:16.180000 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:51:16 crc kubenswrapper[4947]: E1129 07:51:16.181333 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" 
podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:51:30 crc kubenswrapper[4947]: I1129 07:51:30.179642 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:51:30 crc kubenswrapper[4947]: E1129 07:51:30.180747 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:51:42 crc kubenswrapper[4947]: I1129 07:51:42.180171 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:51:42 crc kubenswrapper[4947]: E1129 07:51:42.183550 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:51:54 crc kubenswrapper[4947]: I1129 07:51:54.181446 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:51:54 crc kubenswrapper[4947]: E1129 07:51:54.182662 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:52:08 crc kubenswrapper[4947]: I1129 07:52:08.178583 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:52:08 crc kubenswrapper[4947]: E1129 07:52:08.179781 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:52:19 crc kubenswrapper[4947]: I1129 07:52:19.189875 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:52:19 crc kubenswrapper[4947]: E1129 07:52:19.191047 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:52:34 crc kubenswrapper[4947]: I1129 07:52:34.179118 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:52:34 crc kubenswrapper[4947]: E1129 07:52:34.180008 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:52:49 crc kubenswrapper[4947]: I1129 07:52:49.186770 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:52:49 crc kubenswrapper[4947]: E1129 07:52:49.187754 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:53:02 crc kubenswrapper[4947]: I1129 07:53:02.180727 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:53:02 crc kubenswrapper[4947]: E1129 07:53:02.181720 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:53:17 crc kubenswrapper[4947]: I1129 07:53:17.179292 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:53:17 crc kubenswrapper[4947]: E1129 07:53:17.180290 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:53:32 crc kubenswrapper[4947]: I1129 07:53:32.179908 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:53:32 crc kubenswrapper[4947]: E1129 07:53:32.180929 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:53:44 crc kubenswrapper[4947]: I1129 07:53:44.178367 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:53:44 crc kubenswrapper[4947]: E1129 07:53:44.179249 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:53:59 crc kubenswrapper[4947]: I1129 07:53:59.184772 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:53:59 crc kubenswrapper[4947]: E1129 07:53:59.187030 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:54:10 crc kubenswrapper[4947]: I1129 07:54:10.179093 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:54:10 crc kubenswrapper[4947]: E1129 07:54:10.180200 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:54:24 crc kubenswrapper[4947]: I1129 07:54:24.181106 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:54:24 crc kubenswrapper[4947]: E1129 07:54:24.182947 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:54:36 crc kubenswrapper[4947]: I1129 07:54:36.179280 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:54:36 crc kubenswrapper[4947]: E1129 07:54:36.180193 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:54:42 crc kubenswrapper[4947]: I1129 07:54:42.596290 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gpcrq"] Nov 29 07:54:42 crc kubenswrapper[4947]: E1129 07:54:42.597606 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd2b8464-1fa6-4cd9-8380-d6634168053a" containerName="extract-utilities" Nov 29 07:54:42 crc kubenswrapper[4947]: I1129 07:54:42.597624 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd2b8464-1fa6-4cd9-8380-d6634168053a" containerName="extract-utilities" Nov 29 07:54:42 crc kubenswrapper[4947]: E1129 07:54:42.597640 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd2b8464-1fa6-4cd9-8380-d6634168053a" containerName="extract-content" Nov 29 07:54:42 crc kubenswrapper[4947]: I1129 07:54:42.597646 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd2b8464-1fa6-4cd9-8380-d6634168053a" containerName="extract-content" Nov 29 07:54:42 crc kubenswrapper[4947]: E1129 07:54:42.597661 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd2b8464-1fa6-4cd9-8380-d6634168053a" containerName="registry-server" Nov 29 07:54:42 crc kubenswrapper[4947]: I1129 07:54:42.597668 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd2b8464-1fa6-4cd9-8380-d6634168053a" containerName="registry-server" Nov 29 07:54:42 crc kubenswrapper[4947]: I1129 07:54:42.597871 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd2b8464-1fa6-4cd9-8380-d6634168053a" containerName="registry-server" Nov 29 07:54:42 crc kubenswrapper[4947]: I1129 07:54:42.599412 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gpcrq" Nov 29 07:54:42 crc kubenswrapper[4947]: I1129 07:54:42.610210 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gpcrq"] Nov 29 07:54:42 crc kubenswrapper[4947]: I1129 07:54:42.734705 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqbq5\" (UniqueName: \"kubernetes.io/projected/5b923e22-f064-4798-9884-b2e556365327-kube-api-access-hqbq5\") pod \"certified-operators-gpcrq\" (UID: \"5b923e22-f064-4798-9884-b2e556365327\") " pod="openshift-marketplace/certified-operators-gpcrq" Nov 29 07:54:42 crc kubenswrapper[4947]: I1129 07:54:42.734829 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b923e22-f064-4798-9884-b2e556365327-utilities\") pod \"certified-operators-gpcrq\" (UID: \"5b923e22-f064-4798-9884-b2e556365327\") " pod="openshift-marketplace/certified-operators-gpcrq" Nov 29 07:54:42 crc kubenswrapper[4947]: I1129 07:54:42.734852 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b923e22-f064-4798-9884-b2e556365327-catalog-content\") pod \"certified-operators-gpcrq\" (UID: \"5b923e22-f064-4798-9884-b2e556365327\") " pod="openshift-marketplace/certified-operators-gpcrq" Nov 29 07:54:42 crc kubenswrapper[4947]: I1129 07:54:42.837568 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqbq5\" (UniqueName: \"kubernetes.io/projected/5b923e22-f064-4798-9884-b2e556365327-kube-api-access-hqbq5\") pod \"certified-operators-gpcrq\" (UID: \"5b923e22-f064-4798-9884-b2e556365327\") " pod="openshift-marketplace/certified-operators-gpcrq" Nov 29 07:54:42 crc kubenswrapper[4947]: I1129 07:54:42.837720 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b923e22-f064-4798-9884-b2e556365327-utilities\") pod \"certified-operators-gpcrq\" (UID: \"5b923e22-f064-4798-9884-b2e556365327\") " pod="openshift-marketplace/certified-operators-gpcrq" Nov 29 07:54:42 crc kubenswrapper[4947]: I1129 07:54:42.837751 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b923e22-f064-4798-9884-b2e556365327-catalog-content\") pod \"certified-operators-gpcrq\" (UID: \"5b923e22-f064-4798-9884-b2e556365327\") " pod="openshift-marketplace/certified-operators-gpcrq" Nov 29 07:54:42 crc kubenswrapper[4947]: I1129 07:54:42.838431 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b923e22-f064-4798-9884-b2e556365327-catalog-content\") pod \"certified-operators-gpcrq\" (UID: \"5b923e22-f064-4798-9884-b2e556365327\") " pod="openshift-marketplace/certified-operators-gpcrq" Nov 29 07:54:42 crc kubenswrapper[4947]: I1129 07:54:42.838516 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b923e22-f064-4798-9884-b2e556365327-utilities\") pod \"certified-operators-gpcrq\" (UID: \"5b923e22-f064-4798-9884-b2e556365327\") " pod="openshift-marketplace/certified-operators-gpcrq" Nov 29 07:54:43 crc kubenswrapper[4947]: I1129 07:54:43.430848 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqbq5\" (UniqueName: \"kubernetes.io/projected/5b923e22-f064-4798-9884-b2e556365327-kube-api-access-hqbq5\") pod \"certified-operators-gpcrq\" (UID: \"5b923e22-f064-4798-9884-b2e556365327\") " pod="openshift-marketplace/certified-operators-gpcrq" Nov 29 07:54:43 crc kubenswrapper[4947]: I1129 07:54:43.524355 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gpcrq" Nov 29 07:54:44 crc kubenswrapper[4947]: I1129 07:54:44.123466 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gpcrq"] Nov 29 07:54:44 crc kubenswrapper[4947]: I1129 07:54:44.951448 4947 generic.go:334] "Generic (PLEG): container finished" podID="5b923e22-f064-4798-9884-b2e556365327" containerID="809309981f344204b1f75159b2eef87770f3b22456cc3c02119bebdf7c064045" exitCode=0 Nov 29 07:54:44 crc kubenswrapper[4947]: I1129 07:54:44.951493 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpcrq" event={"ID":"5b923e22-f064-4798-9884-b2e556365327","Type":"ContainerDied","Data":"809309981f344204b1f75159b2eef87770f3b22456cc3c02119bebdf7c064045"} Nov 29 07:54:44 crc kubenswrapper[4947]: I1129 07:54:44.952060 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpcrq" event={"ID":"5b923e22-f064-4798-9884-b2e556365327","Type":"ContainerStarted","Data":"28f50f1f6fe0c61f230741a030450b9d7416cd59de2c21d907dbc4733e1a5da8"} Nov 29 07:54:44 crc kubenswrapper[4947]: I1129 07:54:44.954461 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:54:45 crc kubenswrapper[4947]: I1129 07:54:45.962991 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpcrq" event={"ID":"5b923e22-f064-4798-9884-b2e556365327","Type":"ContainerStarted","Data":"330b043bd7a7ce8e582138cb0fbd8e7568a863bfaa24fd311caaa5b626687587"} Nov 29 07:54:46 crc kubenswrapper[4947]: I1129 07:54:46.975863 4947 generic.go:334] "Generic (PLEG): container finished" podID="5b923e22-f064-4798-9884-b2e556365327" containerID="330b043bd7a7ce8e582138cb0fbd8e7568a863bfaa24fd311caaa5b626687587" exitCode=0 Nov 29 07:54:46 crc kubenswrapper[4947]: I1129 07:54:46.975929 4947 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-gpcrq" event={"ID":"5b923e22-f064-4798-9884-b2e556365327","Type":"ContainerDied","Data":"330b043bd7a7ce8e582138cb0fbd8e7568a863bfaa24fd311caaa5b626687587"} Nov 29 07:54:47 crc kubenswrapper[4947]: I1129 07:54:47.181031 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:54:47 crc kubenswrapper[4947]: E1129 07:54:47.181694 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:54:47 crc kubenswrapper[4947]: I1129 07:54:47.992513 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpcrq" event={"ID":"5b923e22-f064-4798-9884-b2e556365327","Type":"ContainerStarted","Data":"bb4471477a337904df565e4b142d15c4d39bb4a582dc352588ded86558bc287a"} Nov 29 07:54:48 crc kubenswrapper[4947]: I1129 07:54:48.019333 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gpcrq" podStartSLOduration=3.5687911100000003 podStartE2EDuration="6.019309841s" podCreationTimestamp="2025-11-29 07:54:42 +0000 UTC" firstStartedPulling="2025-11-29 07:54:44.95418295 +0000 UTC m=+4835.998565031" lastFinishedPulling="2025-11-29 07:54:47.404701681 +0000 UTC m=+4838.449083762" observedRunningTime="2025-11-29 07:54:48.010432527 +0000 UTC m=+4839.054814608" watchObservedRunningTime="2025-11-29 07:54:48.019309841 +0000 UTC m=+4839.063691942" Nov 29 07:54:53 crc kubenswrapper[4947]: I1129 07:54:53.525542 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-gpcrq" Nov 29 07:54:53 crc kubenswrapper[4947]: I1129 07:54:53.526714 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gpcrq" Nov 29 07:54:53 crc kubenswrapper[4947]: I1129 07:54:53.585720 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gpcrq" Nov 29 07:54:54 crc kubenswrapper[4947]: I1129 07:54:54.104259 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gpcrq" Nov 29 07:54:54 crc kubenswrapper[4947]: I1129 07:54:54.164019 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gpcrq"] Nov 29 07:54:56 crc kubenswrapper[4947]: I1129 07:54:56.065815 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gpcrq" podUID="5b923e22-f064-4798-9884-b2e556365327" containerName="registry-server" containerID="cri-o://bb4471477a337904df565e4b142d15c4d39bb4a582dc352588ded86558bc287a" gracePeriod=2 Nov 29 07:54:56 crc kubenswrapper[4947]: I1129 07:54:56.692787 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gpcrq" Nov 29 07:54:56 crc kubenswrapper[4947]: I1129 07:54:56.767824 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b923e22-f064-4798-9884-b2e556365327-catalog-content\") pod \"5b923e22-f064-4798-9884-b2e556365327\" (UID: \"5b923e22-f064-4798-9884-b2e556365327\") " Nov 29 07:54:56 crc kubenswrapper[4947]: I1129 07:54:56.768123 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqbq5\" (UniqueName: \"kubernetes.io/projected/5b923e22-f064-4798-9884-b2e556365327-kube-api-access-hqbq5\") pod \"5b923e22-f064-4798-9884-b2e556365327\" (UID: \"5b923e22-f064-4798-9884-b2e556365327\") " Nov 29 07:54:56 crc kubenswrapper[4947]: I1129 07:54:56.768385 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b923e22-f064-4798-9884-b2e556365327-utilities\") pod \"5b923e22-f064-4798-9884-b2e556365327\" (UID: \"5b923e22-f064-4798-9884-b2e556365327\") " Nov 29 07:54:56 crc kubenswrapper[4947]: I1129 07:54:56.769791 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b923e22-f064-4798-9884-b2e556365327-utilities" (OuterVolumeSpecName: "utilities") pod "5b923e22-f064-4798-9884-b2e556365327" (UID: "5b923e22-f064-4798-9884-b2e556365327"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:54:56 crc kubenswrapper[4947]: I1129 07:54:56.780525 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b923e22-f064-4798-9884-b2e556365327-kube-api-access-hqbq5" (OuterVolumeSpecName: "kube-api-access-hqbq5") pod "5b923e22-f064-4798-9884-b2e556365327" (UID: "5b923e22-f064-4798-9884-b2e556365327"). InnerVolumeSpecName "kube-api-access-hqbq5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:54:56 crc kubenswrapper[4947]: I1129 07:54:56.818773 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b923e22-f064-4798-9884-b2e556365327-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b923e22-f064-4798-9884-b2e556365327" (UID: "5b923e22-f064-4798-9884-b2e556365327"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:54:56 crc kubenswrapper[4947]: I1129 07:54:56.871027 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b923e22-f064-4798-9884-b2e556365327-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:56 crc kubenswrapper[4947]: I1129 07:54:56.871080 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqbq5\" (UniqueName: \"kubernetes.io/projected/5b923e22-f064-4798-9884-b2e556365327-kube-api-access-hqbq5\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:56 crc kubenswrapper[4947]: I1129 07:54:56.871094 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b923e22-f064-4798-9884-b2e556365327-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:57 crc kubenswrapper[4947]: I1129 07:54:57.078899 4947 generic.go:334] "Generic (PLEG): container finished" podID="5b923e22-f064-4798-9884-b2e556365327" containerID="bb4471477a337904df565e4b142d15c4d39bb4a582dc352588ded86558bc287a" exitCode=0 Nov 29 07:54:57 crc kubenswrapper[4947]: I1129 07:54:57.078982 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gpcrq" Nov 29 07:54:57 crc kubenswrapper[4947]: I1129 07:54:57.078981 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpcrq" event={"ID":"5b923e22-f064-4798-9884-b2e556365327","Type":"ContainerDied","Data":"bb4471477a337904df565e4b142d15c4d39bb4a582dc352588ded86558bc287a"} Nov 29 07:54:57 crc kubenswrapper[4947]: I1129 07:54:57.079386 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpcrq" event={"ID":"5b923e22-f064-4798-9884-b2e556365327","Type":"ContainerDied","Data":"28f50f1f6fe0c61f230741a030450b9d7416cd59de2c21d907dbc4733e1a5da8"} Nov 29 07:54:57 crc kubenswrapper[4947]: I1129 07:54:57.079406 4947 scope.go:117] "RemoveContainer" containerID="bb4471477a337904df565e4b142d15c4d39bb4a582dc352588ded86558bc287a" Nov 29 07:54:57 crc kubenswrapper[4947]: I1129 07:54:57.098789 4947 scope.go:117] "RemoveContainer" containerID="330b043bd7a7ce8e582138cb0fbd8e7568a863bfaa24fd311caaa5b626687587" Nov 29 07:54:57 crc kubenswrapper[4947]: I1129 07:54:57.120350 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gpcrq"] Nov 29 07:54:57 crc kubenswrapper[4947]: I1129 07:54:57.132003 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gpcrq"] Nov 29 07:54:57 crc kubenswrapper[4947]: I1129 07:54:57.136029 4947 scope.go:117] "RemoveContainer" containerID="809309981f344204b1f75159b2eef87770f3b22456cc3c02119bebdf7c064045" Nov 29 07:54:57 crc kubenswrapper[4947]: I1129 07:54:57.176437 4947 scope.go:117] "RemoveContainer" containerID="bb4471477a337904df565e4b142d15c4d39bb4a582dc352588ded86558bc287a" Nov 29 07:54:57 crc kubenswrapper[4947]: E1129 07:54:57.176914 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bb4471477a337904df565e4b142d15c4d39bb4a582dc352588ded86558bc287a\": container with ID starting with bb4471477a337904df565e4b142d15c4d39bb4a582dc352588ded86558bc287a not found: ID does not exist" containerID="bb4471477a337904df565e4b142d15c4d39bb4a582dc352588ded86558bc287a" Nov 29 07:54:57 crc kubenswrapper[4947]: I1129 07:54:57.176995 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb4471477a337904df565e4b142d15c4d39bb4a582dc352588ded86558bc287a"} err="failed to get container status \"bb4471477a337904df565e4b142d15c4d39bb4a582dc352588ded86558bc287a\": rpc error: code = NotFound desc = could not find container \"bb4471477a337904df565e4b142d15c4d39bb4a582dc352588ded86558bc287a\": container with ID starting with bb4471477a337904df565e4b142d15c4d39bb4a582dc352588ded86558bc287a not found: ID does not exist" Nov 29 07:54:57 crc kubenswrapper[4947]: I1129 07:54:57.177032 4947 scope.go:117] "RemoveContainer" containerID="330b043bd7a7ce8e582138cb0fbd8e7568a863bfaa24fd311caaa5b626687587" Nov 29 07:54:57 crc kubenswrapper[4947]: E1129 07:54:57.177683 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"330b043bd7a7ce8e582138cb0fbd8e7568a863bfaa24fd311caaa5b626687587\": container with ID starting with 330b043bd7a7ce8e582138cb0fbd8e7568a863bfaa24fd311caaa5b626687587 not found: ID does not exist" containerID="330b043bd7a7ce8e582138cb0fbd8e7568a863bfaa24fd311caaa5b626687587" Nov 29 07:54:57 crc kubenswrapper[4947]: I1129 07:54:57.177708 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"330b043bd7a7ce8e582138cb0fbd8e7568a863bfaa24fd311caaa5b626687587"} err="failed to get container status \"330b043bd7a7ce8e582138cb0fbd8e7568a863bfaa24fd311caaa5b626687587\": rpc error: code = NotFound desc = could not find container \"330b043bd7a7ce8e582138cb0fbd8e7568a863bfaa24fd311caaa5b626687587\": container with ID 
starting with 330b043bd7a7ce8e582138cb0fbd8e7568a863bfaa24fd311caaa5b626687587 not found: ID does not exist" Nov 29 07:54:57 crc kubenswrapper[4947]: I1129 07:54:57.177725 4947 scope.go:117] "RemoveContainer" containerID="809309981f344204b1f75159b2eef87770f3b22456cc3c02119bebdf7c064045" Nov 29 07:54:57 crc kubenswrapper[4947]: E1129 07:54:57.178725 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"809309981f344204b1f75159b2eef87770f3b22456cc3c02119bebdf7c064045\": container with ID starting with 809309981f344204b1f75159b2eef87770f3b22456cc3c02119bebdf7c064045 not found: ID does not exist" containerID="809309981f344204b1f75159b2eef87770f3b22456cc3c02119bebdf7c064045" Nov 29 07:54:57 crc kubenswrapper[4947]: I1129 07:54:57.178807 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"809309981f344204b1f75159b2eef87770f3b22456cc3c02119bebdf7c064045"} err="failed to get container status \"809309981f344204b1f75159b2eef87770f3b22456cc3c02119bebdf7c064045\": rpc error: code = NotFound desc = could not find container \"809309981f344204b1f75159b2eef87770f3b22456cc3c02119bebdf7c064045\": container with ID starting with 809309981f344204b1f75159b2eef87770f3b22456cc3c02119bebdf7c064045 not found: ID does not exist" Nov 29 07:54:57 crc kubenswrapper[4947]: I1129 07:54:57.194588 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b923e22-f064-4798-9884-b2e556365327" path="/var/lib/kubelet/pods/5b923e22-f064-4798-9884-b2e556365327/volumes" Nov 29 07:55:02 crc kubenswrapper[4947]: I1129 07:55:02.179557 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:55:02 crc kubenswrapper[4947]: E1129 07:55:02.180653 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:55:17 crc kubenswrapper[4947]: I1129 07:55:17.179878 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:55:17 crc kubenswrapper[4947]: E1129 07:55:17.181153 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 07:55:30 crc kubenswrapper[4947]: I1129 07:55:30.180282 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:55:31 crc kubenswrapper[4947]: I1129 07:55:31.467547 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"9f080681d1601bc144f15e156ed1c8685aa9615dbcfac1411311ef254a024e7f"} Nov 29 07:55:51 crc kubenswrapper[4947]: I1129 07:55:51.839624 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wlcwn"] Nov 29 07:55:51 crc kubenswrapper[4947]: E1129 07:55:51.842161 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b923e22-f064-4798-9884-b2e556365327" containerName="extract-utilities" Nov 29 07:55:51 crc kubenswrapper[4947]: I1129 07:55:51.842267 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b923e22-f064-4798-9884-b2e556365327" 
containerName="extract-utilities" Nov 29 07:55:51 crc kubenswrapper[4947]: E1129 07:55:51.842290 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b923e22-f064-4798-9884-b2e556365327" containerName="registry-server" Nov 29 07:55:51 crc kubenswrapper[4947]: I1129 07:55:51.842299 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b923e22-f064-4798-9884-b2e556365327" containerName="registry-server" Nov 29 07:55:51 crc kubenswrapper[4947]: E1129 07:55:51.842378 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b923e22-f064-4798-9884-b2e556365327" containerName="extract-content" Nov 29 07:55:51 crc kubenswrapper[4947]: I1129 07:55:51.842390 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b923e22-f064-4798-9884-b2e556365327" containerName="extract-content" Nov 29 07:55:51 crc kubenswrapper[4947]: I1129 07:55:51.842908 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b923e22-f064-4798-9884-b2e556365327" containerName="registry-server" Nov 29 07:55:51 crc kubenswrapper[4947]: I1129 07:55:51.844615 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wlcwn" Nov 29 07:55:51 crc kubenswrapper[4947]: I1129 07:55:51.852557 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wlcwn"] Nov 29 07:55:51 crc kubenswrapper[4947]: I1129 07:55:51.930586 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c96591b-c4ee-434a-bd8e-de5300a24bc7-catalog-content\") pod \"community-operators-wlcwn\" (UID: \"7c96591b-c4ee-434a-bd8e-de5300a24bc7\") " pod="openshift-marketplace/community-operators-wlcwn" Nov 29 07:55:51 crc kubenswrapper[4947]: I1129 07:55:51.930759 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c96591b-c4ee-434a-bd8e-de5300a24bc7-utilities\") pod \"community-operators-wlcwn\" (UID: \"7c96591b-c4ee-434a-bd8e-de5300a24bc7\") " pod="openshift-marketplace/community-operators-wlcwn" Nov 29 07:55:51 crc kubenswrapper[4947]: I1129 07:55:51.930812 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmdlh\" (UniqueName: \"kubernetes.io/projected/7c96591b-c4ee-434a-bd8e-de5300a24bc7-kube-api-access-pmdlh\") pod \"community-operators-wlcwn\" (UID: \"7c96591b-c4ee-434a-bd8e-de5300a24bc7\") " pod="openshift-marketplace/community-operators-wlcwn" Nov 29 07:55:52 crc kubenswrapper[4947]: I1129 07:55:52.032288 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c96591b-c4ee-434a-bd8e-de5300a24bc7-utilities\") pod \"community-operators-wlcwn\" (UID: \"7c96591b-c4ee-434a-bd8e-de5300a24bc7\") " pod="openshift-marketplace/community-operators-wlcwn" Nov 29 07:55:52 crc kubenswrapper[4947]: I1129 07:55:52.032375 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pmdlh\" (UniqueName: \"kubernetes.io/projected/7c96591b-c4ee-434a-bd8e-de5300a24bc7-kube-api-access-pmdlh\") pod \"community-operators-wlcwn\" (UID: \"7c96591b-c4ee-434a-bd8e-de5300a24bc7\") " pod="openshift-marketplace/community-operators-wlcwn" Nov 29 07:55:52 crc kubenswrapper[4947]: I1129 07:55:52.032436 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c96591b-c4ee-434a-bd8e-de5300a24bc7-catalog-content\") pod \"community-operators-wlcwn\" (UID: \"7c96591b-c4ee-434a-bd8e-de5300a24bc7\") " pod="openshift-marketplace/community-operators-wlcwn" Nov 29 07:55:52 crc kubenswrapper[4947]: I1129 07:55:52.032967 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c96591b-c4ee-434a-bd8e-de5300a24bc7-utilities\") pod \"community-operators-wlcwn\" (UID: \"7c96591b-c4ee-434a-bd8e-de5300a24bc7\") " pod="openshift-marketplace/community-operators-wlcwn" Nov 29 07:55:52 crc kubenswrapper[4947]: I1129 07:55:52.033013 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c96591b-c4ee-434a-bd8e-de5300a24bc7-catalog-content\") pod \"community-operators-wlcwn\" (UID: \"7c96591b-c4ee-434a-bd8e-de5300a24bc7\") " pod="openshift-marketplace/community-operators-wlcwn" Nov 29 07:55:52 crc kubenswrapper[4947]: I1129 07:55:52.063054 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmdlh\" (UniqueName: \"kubernetes.io/projected/7c96591b-c4ee-434a-bd8e-de5300a24bc7-kube-api-access-pmdlh\") pod \"community-operators-wlcwn\" (UID: \"7c96591b-c4ee-434a-bd8e-de5300a24bc7\") " pod="openshift-marketplace/community-operators-wlcwn" Nov 29 07:55:52 crc kubenswrapper[4947]: I1129 07:55:52.170169 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wlcwn" Nov 29 07:55:52 crc kubenswrapper[4947]: I1129 07:55:52.857177 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wlcwn"] Nov 29 07:55:53 crc kubenswrapper[4947]: I1129 07:55:53.752712 4947 generic.go:334] "Generic (PLEG): container finished" podID="7c96591b-c4ee-434a-bd8e-de5300a24bc7" containerID="833bff3422da453349e47533cf7922c62b962e471c3eef31623a634c345a74a7" exitCode=0 Nov 29 07:55:53 crc kubenswrapper[4947]: I1129 07:55:53.752823 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlcwn" event={"ID":"7c96591b-c4ee-434a-bd8e-de5300a24bc7","Type":"ContainerDied","Data":"833bff3422da453349e47533cf7922c62b962e471c3eef31623a634c345a74a7"} Nov 29 07:55:53 crc kubenswrapper[4947]: I1129 07:55:53.753413 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlcwn" event={"ID":"7c96591b-c4ee-434a-bd8e-de5300a24bc7","Type":"ContainerStarted","Data":"6a167f514db83a88a9ba67d3f2723dc70b3fe0b291c4e15dd2a96130a6d5e70e"} Nov 29 07:55:56 crc kubenswrapper[4947]: I1129 07:55:56.641121 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g4n8c"] Nov 29 07:55:56 crc kubenswrapper[4947]: I1129 07:55:56.643784 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g4n8c" Nov 29 07:55:56 crc kubenswrapper[4947]: I1129 07:55:56.653122 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g4n8c"] Nov 29 07:55:56 crc kubenswrapper[4947]: I1129 07:55:56.782930 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a22722d3-5ba9-4151-a69f-6014ff5717c4-utilities\") pod \"redhat-operators-g4n8c\" (UID: \"a22722d3-5ba9-4151-a69f-6014ff5717c4\") " pod="openshift-marketplace/redhat-operators-g4n8c" Nov 29 07:55:56 crc kubenswrapper[4947]: I1129 07:55:56.783382 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvmqc\" (UniqueName: \"kubernetes.io/projected/a22722d3-5ba9-4151-a69f-6014ff5717c4-kube-api-access-cvmqc\") pod \"redhat-operators-g4n8c\" (UID: \"a22722d3-5ba9-4151-a69f-6014ff5717c4\") " pod="openshift-marketplace/redhat-operators-g4n8c" Nov 29 07:55:56 crc kubenswrapper[4947]: I1129 07:55:56.783622 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a22722d3-5ba9-4151-a69f-6014ff5717c4-catalog-content\") pod \"redhat-operators-g4n8c\" (UID: \"a22722d3-5ba9-4151-a69f-6014ff5717c4\") " pod="openshift-marketplace/redhat-operators-g4n8c" Nov 29 07:55:56 crc kubenswrapper[4947]: I1129 07:55:56.886287 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a22722d3-5ba9-4151-a69f-6014ff5717c4-utilities\") pod \"redhat-operators-g4n8c\" (UID: \"a22722d3-5ba9-4151-a69f-6014ff5717c4\") " pod="openshift-marketplace/redhat-operators-g4n8c" Nov 29 07:55:56 crc kubenswrapper[4947]: I1129 07:55:56.886425 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cvmqc\" (UniqueName: \"kubernetes.io/projected/a22722d3-5ba9-4151-a69f-6014ff5717c4-kube-api-access-cvmqc\") pod \"redhat-operators-g4n8c\" (UID: \"a22722d3-5ba9-4151-a69f-6014ff5717c4\") " pod="openshift-marketplace/redhat-operators-g4n8c" Nov 29 07:55:56 crc kubenswrapper[4947]: I1129 07:55:56.886506 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a22722d3-5ba9-4151-a69f-6014ff5717c4-catalog-content\") pod \"redhat-operators-g4n8c\" (UID: \"a22722d3-5ba9-4151-a69f-6014ff5717c4\") " pod="openshift-marketplace/redhat-operators-g4n8c" Nov 29 07:55:56 crc kubenswrapper[4947]: I1129 07:55:56.886989 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a22722d3-5ba9-4151-a69f-6014ff5717c4-utilities\") pod \"redhat-operators-g4n8c\" (UID: \"a22722d3-5ba9-4151-a69f-6014ff5717c4\") " pod="openshift-marketplace/redhat-operators-g4n8c" Nov 29 07:55:56 crc kubenswrapper[4947]: I1129 07:55:56.887050 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a22722d3-5ba9-4151-a69f-6014ff5717c4-catalog-content\") pod \"redhat-operators-g4n8c\" (UID: \"a22722d3-5ba9-4151-a69f-6014ff5717c4\") " pod="openshift-marketplace/redhat-operators-g4n8c" Nov 29 07:55:56 crc kubenswrapper[4947]: I1129 07:55:56.913610 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvmqc\" (UniqueName: \"kubernetes.io/projected/a22722d3-5ba9-4151-a69f-6014ff5717c4-kube-api-access-cvmqc\") pod \"redhat-operators-g4n8c\" (UID: \"a22722d3-5ba9-4151-a69f-6014ff5717c4\") " pod="openshift-marketplace/redhat-operators-g4n8c" Nov 29 07:55:56 crc kubenswrapper[4947]: I1129 07:55:56.980572 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g4n8c" Nov 29 07:55:57 crc kubenswrapper[4947]: I1129 07:55:57.512670 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g4n8c"] Nov 29 07:55:57 crc kubenswrapper[4947]: W1129 07:55:57.519410 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda22722d3_5ba9_4151_a69f_6014ff5717c4.slice/crio-32259d1083f06a3397bd177dbb2d1a7a8a5746ae99f5a9c4602ee82a52435568 WatchSource:0}: Error finding container 32259d1083f06a3397bd177dbb2d1a7a8a5746ae99f5a9c4602ee82a52435568: Status 404 returned error can't find the container with id 32259d1083f06a3397bd177dbb2d1a7a8a5746ae99f5a9c4602ee82a52435568 Nov 29 07:55:57 crc kubenswrapper[4947]: I1129 07:55:57.795793 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4n8c" event={"ID":"a22722d3-5ba9-4151-a69f-6014ff5717c4","Type":"ContainerStarted","Data":"32259d1083f06a3397bd177dbb2d1a7a8a5746ae99f5a9c4602ee82a52435568"} Nov 29 07:55:58 crc kubenswrapper[4947]: I1129 07:55:58.814108 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlcwn" event={"ID":"7c96591b-c4ee-434a-bd8e-de5300a24bc7","Type":"ContainerStarted","Data":"16843f85c035345cce56787e298633928a19fcc97781f52da76a8d5aa3781027"} Nov 29 07:55:58 crc kubenswrapper[4947]: I1129 07:55:58.815846 4947 generic.go:334] "Generic (PLEG): container finished" podID="a22722d3-5ba9-4151-a69f-6014ff5717c4" containerID="6662bd32681dce802dcf8b0f0beea0cb00850c24ad0ee4e11599fc046cc01259" exitCode=0 Nov 29 07:55:58 crc kubenswrapper[4947]: I1129 07:55:58.815881 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4n8c" 
event={"ID":"a22722d3-5ba9-4151-a69f-6014ff5717c4","Type":"ContainerDied","Data":"6662bd32681dce802dcf8b0f0beea0cb00850c24ad0ee4e11599fc046cc01259"} Nov 29 07:55:59 crc kubenswrapper[4947]: I1129 07:55:59.828142 4947 generic.go:334] "Generic (PLEG): container finished" podID="7c96591b-c4ee-434a-bd8e-de5300a24bc7" containerID="16843f85c035345cce56787e298633928a19fcc97781f52da76a8d5aa3781027" exitCode=0 Nov 29 07:55:59 crc kubenswrapper[4947]: I1129 07:55:59.828199 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlcwn" event={"ID":"7c96591b-c4ee-434a-bd8e-de5300a24bc7","Type":"ContainerDied","Data":"16843f85c035345cce56787e298633928a19fcc97781f52da76a8d5aa3781027"} Nov 29 07:56:00 crc kubenswrapper[4947]: I1129 07:56:00.840576 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4n8c" event={"ID":"a22722d3-5ba9-4151-a69f-6014ff5717c4","Type":"ContainerStarted","Data":"ce0ffc4ea06b6f2565c706d6188c34d0b86dda1aca268fcc70d2e2f6ac424dd5"} Nov 29 07:56:01 crc kubenswrapper[4947]: I1129 07:56:01.851408 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlcwn" event={"ID":"7c96591b-c4ee-434a-bd8e-de5300a24bc7","Type":"ContainerStarted","Data":"a87a79474aaecb5402b683ee629d3782ce286c189b72e568a10c81e56eca1d39"} Nov 29 07:56:01 crc kubenswrapper[4947]: I1129 07:56:01.852942 4947 generic.go:334] "Generic (PLEG): container finished" podID="a22722d3-5ba9-4151-a69f-6014ff5717c4" containerID="ce0ffc4ea06b6f2565c706d6188c34d0b86dda1aca268fcc70d2e2f6ac424dd5" exitCode=0 Nov 29 07:56:01 crc kubenswrapper[4947]: I1129 07:56:01.852971 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4n8c" event={"ID":"a22722d3-5ba9-4151-a69f-6014ff5717c4","Type":"ContainerDied","Data":"ce0ffc4ea06b6f2565c706d6188c34d0b86dda1aca268fcc70d2e2f6ac424dd5"} Nov 29 07:56:01 crc kubenswrapper[4947]: I1129 
07:56:01.875414 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wlcwn" podStartSLOduration=3.510719093 podStartE2EDuration="10.875385586s" podCreationTimestamp="2025-11-29 07:55:51 +0000 UTC" firstStartedPulling="2025-11-29 07:55:53.755342138 +0000 UTC m=+4904.799724209" lastFinishedPulling="2025-11-29 07:56:01.120008621 +0000 UTC m=+4912.164390702" observedRunningTime="2025-11-29 07:56:01.869531097 +0000 UTC m=+4912.913913188" watchObservedRunningTime="2025-11-29 07:56:01.875385586 +0000 UTC m=+4912.919767667" Nov 29 07:56:02 crc kubenswrapper[4947]: I1129 07:56:02.170639 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wlcwn" Nov 29 07:56:02 crc kubenswrapper[4947]: I1129 07:56:02.170832 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wlcwn" Nov 29 07:56:03 crc kubenswrapper[4947]: I1129 07:56:03.225958 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wlcwn" podUID="7c96591b-c4ee-434a-bd8e-de5300a24bc7" containerName="registry-server" probeResult="failure" output=< Nov 29 07:56:03 crc kubenswrapper[4947]: timeout: failed to connect service ":50051" within 1s Nov 29 07:56:03 crc kubenswrapper[4947]: > Nov 29 07:56:07 crc kubenswrapper[4947]: I1129 07:56:07.911054 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4n8c" event={"ID":"a22722d3-5ba9-4151-a69f-6014ff5717c4","Type":"ContainerStarted","Data":"73cf3ce8c80b4e05bf5ba2b403b35eaa9dbd2ce1976a123f2ff5299b811fec67"} Nov 29 07:56:07 crc kubenswrapper[4947]: I1129 07:56:07.940736 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g4n8c" podStartSLOduration=4.050302009 podStartE2EDuration="11.940714843s" podCreationTimestamp="2025-11-29 
07:55:56 +0000 UTC" firstStartedPulling="2025-11-29 07:55:58.819114368 +0000 UTC m=+4909.863496449" lastFinishedPulling="2025-11-29 07:56:06.709527202 +0000 UTC m=+4917.753909283" observedRunningTime="2025-11-29 07:56:07.93863575 +0000 UTC m=+4918.983017841" watchObservedRunningTime="2025-11-29 07:56:07.940714843 +0000 UTC m=+4918.985096924" Nov 29 07:56:12 crc kubenswrapper[4947]: I1129 07:56:12.243686 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wlcwn" Nov 29 07:56:12 crc kubenswrapper[4947]: I1129 07:56:12.311216 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wlcwn" Nov 29 07:56:12 crc kubenswrapper[4947]: I1129 07:56:12.485020 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wlcwn"] Nov 29 07:56:13 crc kubenswrapper[4947]: I1129 07:56:13.969884 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wlcwn" podUID="7c96591b-c4ee-434a-bd8e-de5300a24bc7" containerName="registry-server" containerID="cri-o://a87a79474aaecb5402b683ee629d3782ce286c189b72e568a10c81e56eca1d39" gracePeriod=2 Nov 29 07:56:14 crc kubenswrapper[4947]: I1129 07:56:14.561000 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wlcwn" Nov 29 07:56:14 crc kubenswrapper[4947]: I1129 07:56:14.760478 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c96591b-c4ee-434a-bd8e-de5300a24bc7-catalog-content\") pod \"7c96591b-c4ee-434a-bd8e-de5300a24bc7\" (UID: \"7c96591b-c4ee-434a-bd8e-de5300a24bc7\") " Nov 29 07:56:14 crc kubenswrapper[4947]: I1129 07:56:14.760713 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c96591b-c4ee-434a-bd8e-de5300a24bc7-utilities\") pod \"7c96591b-c4ee-434a-bd8e-de5300a24bc7\" (UID: \"7c96591b-c4ee-434a-bd8e-de5300a24bc7\") " Nov 29 07:56:14 crc kubenswrapper[4947]: I1129 07:56:14.760784 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmdlh\" (UniqueName: \"kubernetes.io/projected/7c96591b-c4ee-434a-bd8e-de5300a24bc7-kube-api-access-pmdlh\") pod \"7c96591b-c4ee-434a-bd8e-de5300a24bc7\" (UID: \"7c96591b-c4ee-434a-bd8e-de5300a24bc7\") " Nov 29 07:56:14 crc kubenswrapper[4947]: I1129 07:56:14.762101 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c96591b-c4ee-434a-bd8e-de5300a24bc7-utilities" (OuterVolumeSpecName: "utilities") pod "7c96591b-c4ee-434a-bd8e-de5300a24bc7" (UID: "7c96591b-c4ee-434a-bd8e-de5300a24bc7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:56:14 crc kubenswrapper[4947]: I1129 07:56:14.776644 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c96591b-c4ee-434a-bd8e-de5300a24bc7-kube-api-access-pmdlh" (OuterVolumeSpecName: "kube-api-access-pmdlh") pod "7c96591b-c4ee-434a-bd8e-de5300a24bc7" (UID: "7c96591b-c4ee-434a-bd8e-de5300a24bc7"). InnerVolumeSpecName "kube-api-access-pmdlh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:56:14 crc kubenswrapper[4947]: I1129 07:56:14.819616 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c96591b-c4ee-434a-bd8e-de5300a24bc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c96591b-c4ee-434a-bd8e-de5300a24bc7" (UID: "7c96591b-c4ee-434a-bd8e-de5300a24bc7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:56:14 crc kubenswrapper[4947]: I1129 07:56:14.864660 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c96591b-c4ee-434a-bd8e-de5300a24bc7-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:14 crc kubenswrapper[4947]: I1129 07:56:14.864714 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmdlh\" (UniqueName: \"kubernetes.io/projected/7c96591b-c4ee-434a-bd8e-de5300a24bc7-kube-api-access-pmdlh\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:14 crc kubenswrapper[4947]: I1129 07:56:14.864725 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c96591b-c4ee-434a-bd8e-de5300a24bc7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:14 crc kubenswrapper[4947]: I1129 07:56:14.981508 4947 generic.go:334] "Generic (PLEG): container finished" podID="7c96591b-c4ee-434a-bd8e-de5300a24bc7" containerID="a87a79474aaecb5402b683ee629d3782ce286c189b72e568a10c81e56eca1d39" exitCode=0 Nov 29 07:56:14 crc kubenswrapper[4947]: I1129 07:56:14.981561 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlcwn" event={"ID":"7c96591b-c4ee-434a-bd8e-de5300a24bc7","Type":"ContainerDied","Data":"a87a79474aaecb5402b683ee629d3782ce286c189b72e568a10c81e56eca1d39"} Nov 29 07:56:14 crc kubenswrapper[4947]: I1129 07:56:14.981591 4947 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-wlcwn" Nov 29 07:56:14 crc kubenswrapper[4947]: I1129 07:56:14.981618 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlcwn" event={"ID":"7c96591b-c4ee-434a-bd8e-de5300a24bc7","Type":"ContainerDied","Data":"6a167f514db83a88a9ba67d3f2723dc70b3fe0b291c4e15dd2a96130a6d5e70e"} Nov 29 07:56:14 crc kubenswrapper[4947]: I1129 07:56:14.981646 4947 scope.go:117] "RemoveContainer" containerID="a87a79474aaecb5402b683ee629d3782ce286c189b72e568a10c81e56eca1d39" Nov 29 07:56:15 crc kubenswrapper[4947]: I1129 07:56:15.005047 4947 scope.go:117] "RemoveContainer" containerID="16843f85c035345cce56787e298633928a19fcc97781f52da76a8d5aa3781027" Nov 29 07:56:15 crc kubenswrapper[4947]: I1129 07:56:15.017631 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wlcwn"] Nov 29 07:56:15 crc kubenswrapper[4947]: I1129 07:56:15.026293 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wlcwn"] Nov 29 07:56:15 crc kubenswrapper[4947]: I1129 07:56:15.039427 4947 scope.go:117] "RemoveContainer" containerID="833bff3422da453349e47533cf7922c62b962e471c3eef31623a634c345a74a7" Nov 29 07:56:15 crc kubenswrapper[4947]: I1129 07:56:15.086815 4947 scope.go:117] "RemoveContainer" containerID="a87a79474aaecb5402b683ee629d3782ce286c189b72e568a10c81e56eca1d39" Nov 29 07:56:15 crc kubenswrapper[4947]: E1129 07:56:15.087320 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a87a79474aaecb5402b683ee629d3782ce286c189b72e568a10c81e56eca1d39\": container with ID starting with a87a79474aaecb5402b683ee629d3782ce286c189b72e568a10c81e56eca1d39 not found: ID does not exist" containerID="a87a79474aaecb5402b683ee629d3782ce286c189b72e568a10c81e56eca1d39" Nov 29 07:56:15 crc kubenswrapper[4947]: I1129 07:56:15.087374 
4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a87a79474aaecb5402b683ee629d3782ce286c189b72e568a10c81e56eca1d39"} err="failed to get container status \"a87a79474aaecb5402b683ee629d3782ce286c189b72e568a10c81e56eca1d39\": rpc error: code = NotFound desc = could not find container \"a87a79474aaecb5402b683ee629d3782ce286c189b72e568a10c81e56eca1d39\": container with ID starting with a87a79474aaecb5402b683ee629d3782ce286c189b72e568a10c81e56eca1d39 not found: ID does not exist" Nov 29 07:56:15 crc kubenswrapper[4947]: I1129 07:56:15.087417 4947 scope.go:117] "RemoveContainer" containerID="16843f85c035345cce56787e298633928a19fcc97781f52da76a8d5aa3781027" Nov 29 07:56:15 crc kubenswrapper[4947]: E1129 07:56:15.087730 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16843f85c035345cce56787e298633928a19fcc97781f52da76a8d5aa3781027\": container with ID starting with 16843f85c035345cce56787e298633928a19fcc97781f52da76a8d5aa3781027 not found: ID does not exist" containerID="16843f85c035345cce56787e298633928a19fcc97781f52da76a8d5aa3781027" Nov 29 07:56:15 crc kubenswrapper[4947]: I1129 07:56:15.087749 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16843f85c035345cce56787e298633928a19fcc97781f52da76a8d5aa3781027"} err="failed to get container status \"16843f85c035345cce56787e298633928a19fcc97781f52da76a8d5aa3781027\": rpc error: code = NotFound desc = could not find container \"16843f85c035345cce56787e298633928a19fcc97781f52da76a8d5aa3781027\": container with ID starting with 16843f85c035345cce56787e298633928a19fcc97781f52da76a8d5aa3781027 not found: ID does not exist" Nov 29 07:56:15 crc kubenswrapper[4947]: I1129 07:56:15.087762 4947 scope.go:117] "RemoveContainer" containerID="833bff3422da453349e47533cf7922c62b962e471c3eef31623a634c345a74a7" Nov 29 07:56:15 crc kubenswrapper[4947]: E1129 
07:56:15.087935 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"833bff3422da453349e47533cf7922c62b962e471c3eef31623a634c345a74a7\": container with ID starting with 833bff3422da453349e47533cf7922c62b962e471c3eef31623a634c345a74a7 not found: ID does not exist" containerID="833bff3422da453349e47533cf7922c62b962e471c3eef31623a634c345a74a7" Nov 29 07:56:15 crc kubenswrapper[4947]: I1129 07:56:15.087955 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"833bff3422da453349e47533cf7922c62b962e471c3eef31623a634c345a74a7"} err="failed to get container status \"833bff3422da453349e47533cf7922c62b962e471c3eef31623a634c345a74a7\": rpc error: code = NotFound desc = could not find container \"833bff3422da453349e47533cf7922c62b962e471c3eef31623a634c345a74a7\": container with ID starting with 833bff3422da453349e47533cf7922c62b962e471c3eef31623a634c345a74a7 not found: ID does not exist" Nov 29 07:56:15 crc kubenswrapper[4947]: I1129 07:56:15.193111 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c96591b-c4ee-434a-bd8e-de5300a24bc7" path="/var/lib/kubelet/pods/7c96591b-c4ee-434a-bd8e-de5300a24bc7/volumes" Nov 29 07:56:16 crc kubenswrapper[4947]: I1129 07:56:16.980811 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g4n8c" Nov 29 07:56:16 crc kubenswrapper[4947]: I1129 07:56:16.981352 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g4n8c" Nov 29 07:56:17 crc kubenswrapper[4947]: I1129 07:56:17.036950 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g4n8c" Nov 29 07:56:17 crc kubenswrapper[4947]: I1129 07:56:17.093384 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-g4n8c" Nov 29 07:56:17 crc kubenswrapper[4947]: I1129 07:56:17.885838 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g4n8c"] Nov 29 07:56:19 crc kubenswrapper[4947]: I1129 07:56:19.021535 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g4n8c" podUID="a22722d3-5ba9-4151-a69f-6014ff5717c4" containerName="registry-server" containerID="cri-o://73cf3ce8c80b4e05bf5ba2b403b35eaa9dbd2ce1976a123f2ff5299b811fec67" gracePeriod=2 Nov 29 07:56:19 crc kubenswrapper[4947]: I1129 07:56:19.625196 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4n8c" Nov 29 07:56:19 crc kubenswrapper[4947]: I1129 07:56:19.769200 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a22722d3-5ba9-4151-a69f-6014ff5717c4-utilities\") pod \"a22722d3-5ba9-4151-a69f-6014ff5717c4\" (UID: \"a22722d3-5ba9-4151-a69f-6014ff5717c4\") " Nov 29 07:56:19 crc kubenswrapper[4947]: I1129 07:56:19.769311 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvmqc\" (UniqueName: \"kubernetes.io/projected/a22722d3-5ba9-4151-a69f-6014ff5717c4-kube-api-access-cvmqc\") pod \"a22722d3-5ba9-4151-a69f-6014ff5717c4\" (UID: \"a22722d3-5ba9-4151-a69f-6014ff5717c4\") " Nov 29 07:56:19 crc kubenswrapper[4947]: I1129 07:56:19.769434 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a22722d3-5ba9-4151-a69f-6014ff5717c4-catalog-content\") pod \"a22722d3-5ba9-4151-a69f-6014ff5717c4\" (UID: \"a22722d3-5ba9-4151-a69f-6014ff5717c4\") " Nov 29 07:56:19 crc kubenswrapper[4947]: I1129 07:56:19.770330 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a22722d3-5ba9-4151-a69f-6014ff5717c4-utilities" (OuterVolumeSpecName: "utilities") pod "a22722d3-5ba9-4151-a69f-6014ff5717c4" (UID: "a22722d3-5ba9-4151-a69f-6014ff5717c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:56:19 crc kubenswrapper[4947]: I1129 07:56:19.872091 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a22722d3-5ba9-4151-a69f-6014ff5717c4-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:19 crc kubenswrapper[4947]: I1129 07:56:19.895806 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a22722d3-5ba9-4151-a69f-6014ff5717c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a22722d3-5ba9-4151-a69f-6014ff5717c4" (UID: "a22722d3-5ba9-4151-a69f-6014ff5717c4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:56:19 crc kubenswrapper[4947]: I1129 07:56:19.974388 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a22722d3-5ba9-4151-a69f-6014ff5717c4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:20 crc kubenswrapper[4947]: I1129 07:56:20.033376 4947 generic.go:334] "Generic (PLEG): container finished" podID="a22722d3-5ba9-4151-a69f-6014ff5717c4" containerID="73cf3ce8c80b4e05bf5ba2b403b35eaa9dbd2ce1976a123f2ff5299b811fec67" exitCode=0 Nov 29 07:56:20 crc kubenswrapper[4947]: I1129 07:56:20.033404 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g4n8c" Nov 29 07:56:20 crc kubenswrapper[4947]: I1129 07:56:20.033438 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4n8c" event={"ID":"a22722d3-5ba9-4151-a69f-6014ff5717c4","Type":"ContainerDied","Data":"73cf3ce8c80b4e05bf5ba2b403b35eaa9dbd2ce1976a123f2ff5299b811fec67"} Nov 29 07:56:20 crc kubenswrapper[4947]: I1129 07:56:20.033469 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4n8c" event={"ID":"a22722d3-5ba9-4151-a69f-6014ff5717c4","Type":"ContainerDied","Data":"32259d1083f06a3397bd177dbb2d1a7a8a5746ae99f5a9c4602ee82a52435568"} Nov 29 07:56:20 crc kubenswrapper[4947]: I1129 07:56:20.033492 4947 scope.go:117] "RemoveContainer" containerID="73cf3ce8c80b4e05bf5ba2b403b35eaa9dbd2ce1976a123f2ff5299b811fec67" Nov 29 07:56:20 crc kubenswrapper[4947]: I1129 07:56:20.057938 4947 scope.go:117] "RemoveContainer" containerID="ce0ffc4ea06b6f2565c706d6188c34d0b86dda1aca268fcc70d2e2f6ac424dd5" Nov 29 07:56:20 crc kubenswrapper[4947]: I1129 07:56:20.228663 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a22722d3-5ba9-4151-a69f-6014ff5717c4-kube-api-access-cvmqc" (OuterVolumeSpecName: "kube-api-access-cvmqc") pod "a22722d3-5ba9-4151-a69f-6014ff5717c4" (UID: "a22722d3-5ba9-4151-a69f-6014ff5717c4"). InnerVolumeSpecName "kube-api-access-cvmqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:56:20 crc kubenswrapper[4947]: I1129 07:56:20.245261 4947 scope.go:117] "RemoveContainer" containerID="6662bd32681dce802dcf8b0f0beea0cb00850c24ad0ee4e11599fc046cc01259" Nov 29 07:56:20 crc kubenswrapper[4947]: I1129 07:56:20.282729 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvmqc\" (UniqueName: \"kubernetes.io/projected/a22722d3-5ba9-4151-a69f-6014ff5717c4-kube-api-access-cvmqc\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:20 crc kubenswrapper[4947]: I1129 07:56:20.555894 4947 scope.go:117] "RemoveContainer" containerID="73cf3ce8c80b4e05bf5ba2b403b35eaa9dbd2ce1976a123f2ff5299b811fec67" Nov 29 07:56:20 crc kubenswrapper[4947]: E1129 07:56:20.556913 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73cf3ce8c80b4e05bf5ba2b403b35eaa9dbd2ce1976a123f2ff5299b811fec67\": container with ID starting with 73cf3ce8c80b4e05bf5ba2b403b35eaa9dbd2ce1976a123f2ff5299b811fec67 not found: ID does not exist" containerID="73cf3ce8c80b4e05bf5ba2b403b35eaa9dbd2ce1976a123f2ff5299b811fec67" Nov 29 07:56:20 crc kubenswrapper[4947]: I1129 07:56:20.556965 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73cf3ce8c80b4e05bf5ba2b403b35eaa9dbd2ce1976a123f2ff5299b811fec67"} err="failed to get container status \"73cf3ce8c80b4e05bf5ba2b403b35eaa9dbd2ce1976a123f2ff5299b811fec67\": rpc error: code = NotFound desc = could not find container \"73cf3ce8c80b4e05bf5ba2b403b35eaa9dbd2ce1976a123f2ff5299b811fec67\": container with ID starting with 73cf3ce8c80b4e05bf5ba2b403b35eaa9dbd2ce1976a123f2ff5299b811fec67 not found: ID does not exist" Nov 29 07:56:20 crc kubenswrapper[4947]: I1129 07:56:20.556994 4947 scope.go:117] "RemoveContainer" containerID="ce0ffc4ea06b6f2565c706d6188c34d0b86dda1aca268fcc70d2e2f6ac424dd5" Nov 29 07:56:20 crc kubenswrapper[4947]: E1129 07:56:20.557597 
4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce0ffc4ea06b6f2565c706d6188c34d0b86dda1aca268fcc70d2e2f6ac424dd5\": container with ID starting with ce0ffc4ea06b6f2565c706d6188c34d0b86dda1aca268fcc70d2e2f6ac424dd5 not found: ID does not exist" containerID="ce0ffc4ea06b6f2565c706d6188c34d0b86dda1aca268fcc70d2e2f6ac424dd5" Nov 29 07:56:20 crc kubenswrapper[4947]: I1129 07:56:20.557629 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce0ffc4ea06b6f2565c706d6188c34d0b86dda1aca268fcc70d2e2f6ac424dd5"} err="failed to get container status \"ce0ffc4ea06b6f2565c706d6188c34d0b86dda1aca268fcc70d2e2f6ac424dd5\": rpc error: code = NotFound desc = could not find container \"ce0ffc4ea06b6f2565c706d6188c34d0b86dda1aca268fcc70d2e2f6ac424dd5\": container with ID starting with ce0ffc4ea06b6f2565c706d6188c34d0b86dda1aca268fcc70d2e2f6ac424dd5 not found: ID does not exist" Nov 29 07:56:20 crc kubenswrapper[4947]: I1129 07:56:20.557647 4947 scope.go:117] "RemoveContainer" containerID="6662bd32681dce802dcf8b0f0beea0cb00850c24ad0ee4e11599fc046cc01259" Nov 29 07:56:20 crc kubenswrapper[4947]: E1129 07:56:20.558197 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6662bd32681dce802dcf8b0f0beea0cb00850c24ad0ee4e11599fc046cc01259\": container with ID starting with 6662bd32681dce802dcf8b0f0beea0cb00850c24ad0ee4e11599fc046cc01259 not found: ID does not exist" containerID="6662bd32681dce802dcf8b0f0beea0cb00850c24ad0ee4e11599fc046cc01259" Nov 29 07:56:20 crc kubenswrapper[4947]: I1129 07:56:20.558273 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6662bd32681dce802dcf8b0f0beea0cb00850c24ad0ee4e11599fc046cc01259"} err="failed to get container status \"6662bd32681dce802dcf8b0f0beea0cb00850c24ad0ee4e11599fc046cc01259\": rpc error: code = 
NotFound desc = could not find container \"6662bd32681dce802dcf8b0f0beea0cb00850c24ad0ee4e11599fc046cc01259\": container with ID starting with 6662bd32681dce802dcf8b0f0beea0cb00850c24ad0ee4e11599fc046cc01259 not found: ID does not exist" Nov 29 07:56:20 crc kubenswrapper[4947]: I1129 07:56:20.614198 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g4n8c"] Nov 29 07:56:20 crc kubenswrapper[4947]: I1129 07:56:20.627303 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g4n8c"] Nov 29 07:56:21 crc kubenswrapper[4947]: I1129 07:56:21.215156 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a22722d3-5ba9-4151-a69f-6014ff5717c4" path="/var/lib/kubelet/pods/a22722d3-5ba9-4151-a69f-6014ff5717c4/volumes" Nov 29 07:57:52 crc kubenswrapper[4947]: I1129 07:57:52.987305 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:57:52 crc kubenswrapper[4947]: I1129 07:57:52.987928 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:58:16 crc kubenswrapper[4947]: I1129 07:58:16.614399 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bwgmm"] Nov 29 07:58:16 crc kubenswrapper[4947]: E1129 07:58:16.615608 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c96591b-c4ee-434a-bd8e-de5300a24bc7" containerName="extract-content" Nov 29 07:58:16 crc kubenswrapper[4947]: I1129 
07:58:16.615623 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c96591b-c4ee-434a-bd8e-de5300a24bc7" containerName="extract-content" Nov 29 07:58:16 crc kubenswrapper[4947]: E1129 07:58:16.615641 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a22722d3-5ba9-4151-a69f-6014ff5717c4" containerName="extract-content" Nov 29 07:58:16 crc kubenswrapper[4947]: I1129 07:58:16.615650 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="a22722d3-5ba9-4151-a69f-6014ff5717c4" containerName="extract-content" Nov 29 07:58:16 crc kubenswrapper[4947]: E1129 07:58:16.615666 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c96591b-c4ee-434a-bd8e-de5300a24bc7" containerName="extract-utilities" Nov 29 07:58:16 crc kubenswrapper[4947]: I1129 07:58:16.615673 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c96591b-c4ee-434a-bd8e-de5300a24bc7" containerName="extract-utilities" Nov 29 07:58:16 crc kubenswrapper[4947]: E1129 07:58:16.615702 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a22722d3-5ba9-4151-a69f-6014ff5717c4" containerName="registry-server" Nov 29 07:58:16 crc kubenswrapper[4947]: I1129 07:58:16.615707 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="a22722d3-5ba9-4151-a69f-6014ff5717c4" containerName="registry-server" Nov 29 07:58:16 crc kubenswrapper[4947]: E1129 07:58:16.615730 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c96591b-c4ee-434a-bd8e-de5300a24bc7" containerName="registry-server" Nov 29 07:58:16 crc kubenswrapper[4947]: I1129 07:58:16.615736 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c96591b-c4ee-434a-bd8e-de5300a24bc7" containerName="registry-server" Nov 29 07:58:16 crc kubenswrapper[4947]: E1129 07:58:16.615748 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a22722d3-5ba9-4151-a69f-6014ff5717c4" containerName="extract-utilities" Nov 29 07:58:16 crc kubenswrapper[4947]: I1129 
07:58:16.615754 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="a22722d3-5ba9-4151-a69f-6014ff5717c4" containerName="extract-utilities" Nov 29 07:58:16 crc kubenswrapper[4947]: I1129 07:58:16.615987 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c96591b-c4ee-434a-bd8e-de5300a24bc7" containerName="registry-server" Nov 29 07:58:16 crc kubenswrapper[4947]: I1129 07:58:16.616007 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="a22722d3-5ba9-4151-a69f-6014ff5717c4" containerName="registry-server" Nov 29 07:58:16 crc kubenswrapper[4947]: I1129 07:58:16.619200 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bwgmm" Nov 29 07:58:16 crc kubenswrapper[4947]: I1129 07:58:16.631903 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwgmm"] Nov 29 07:58:16 crc kubenswrapper[4947]: I1129 07:58:16.688060 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnsxn\" (UniqueName: \"kubernetes.io/projected/c4f11288-4855-42ca-afd2-5ef9419382ba-kube-api-access-lnsxn\") pod \"redhat-marketplace-bwgmm\" (UID: \"c4f11288-4855-42ca-afd2-5ef9419382ba\") " pod="openshift-marketplace/redhat-marketplace-bwgmm" Nov 29 07:58:16 crc kubenswrapper[4947]: I1129 07:58:16.688203 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4f11288-4855-42ca-afd2-5ef9419382ba-utilities\") pod \"redhat-marketplace-bwgmm\" (UID: \"c4f11288-4855-42ca-afd2-5ef9419382ba\") " pod="openshift-marketplace/redhat-marketplace-bwgmm" Nov 29 07:58:16 crc kubenswrapper[4947]: I1129 07:58:16.688246 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c4f11288-4855-42ca-afd2-5ef9419382ba-catalog-content\") pod \"redhat-marketplace-bwgmm\" (UID: \"c4f11288-4855-42ca-afd2-5ef9419382ba\") " pod="openshift-marketplace/redhat-marketplace-bwgmm" Nov 29 07:58:16 crc kubenswrapper[4947]: I1129 07:58:16.790355 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnsxn\" (UniqueName: \"kubernetes.io/projected/c4f11288-4855-42ca-afd2-5ef9419382ba-kube-api-access-lnsxn\") pod \"redhat-marketplace-bwgmm\" (UID: \"c4f11288-4855-42ca-afd2-5ef9419382ba\") " pod="openshift-marketplace/redhat-marketplace-bwgmm" Nov 29 07:58:16 crc kubenswrapper[4947]: I1129 07:58:16.790526 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4f11288-4855-42ca-afd2-5ef9419382ba-utilities\") pod \"redhat-marketplace-bwgmm\" (UID: \"c4f11288-4855-42ca-afd2-5ef9419382ba\") " pod="openshift-marketplace/redhat-marketplace-bwgmm" Nov 29 07:58:16 crc kubenswrapper[4947]: I1129 07:58:16.790565 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4f11288-4855-42ca-afd2-5ef9419382ba-catalog-content\") pod \"redhat-marketplace-bwgmm\" (UID: \"c4f11288-4855-42ca-afd2-5ef9419382ba\") " pod="openshift-marketplace/redhat-marketplace-bwgmm" Nov 29 07:58:16 crc kubenswrapper[4947]: I1129 07:58:16.791150 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4f11288-4855-42ca-afd2-5ef9419382ba-catalog-content\") pod \"redhat-marketplace-bwgmm\" (UID: \"c4f11288-4855-42ca-afd2-5ef9419382ba\") " pod="openshift-marketplace/redhat-marketplace-bwgmm" Nov 29 07:58:16 crc kubenswrapper[4947]: I1129 07:58:16.791312 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c4f11288-4855-42ca-afd2-5ef9419382ba-utilities\") pod \"redhat-marketplace-bwgmm\" (UID: \"c4f11288-4855-42ca-afd2-5ef9419382ba\") " pod="openshift-marketplace/redhat-marketplace-bwgmm" Nov 29 07:58:16 crc kubenswrapper[4947]: I1129 07:58:16.812698 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnsxn\" (UniqueName: \"kubernetes.io/projected/c4f11288-4855-42ca-afd2-5ef9419382ba-kube-api-access-lnsxn\") pod \"redhat-marketplace-bwgmm\" (UID: \"c4f11288-4855-42ca-afd2-5ef9419382ba\") " pod="openshift-marketplace/redhat-marketplace-bwgmm" Nov 29 07:58:16 crc kubenswrapper[4947]: I1129 07:58:16.949136 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bwgmm" Nov 29 07:58:17 crc kubenswrapper[4947]: I1129 07:58:17.468137 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwgmm"] Nov 29 07:58:18 crc kubenswrapper[4947]: I1129 07:58:18.235402 4947 generic.go:334] "Generic (PLEG): container finished" podID="c4f11288-4855-42ca-afd2-5ef9419382ba" containerID="97e29141337be01eff6e522ecde3e8b2d9ef391535c98627e1dace2b8a744c8f" exitCode=0 Nov 29 07:58:18 crc kubenswrapper[4947]: I1129 07:58:18.235540 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwgmm" event={"ID":"c4f11288-4855-42ca-afd2-5ef9419382ba","Type":"ContainerDied","Data":"97e29141337be01eff6e522ecde3e8b2d9ef391535c98627e1dace2b8a744c8f"} Nov 29 07:58:18 crc kubenswrapper[4947]: I1129 07:58:18.236665 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwgmm" event={"ID":"c4f11288-4855-42ca-afd2-5ef9419382ba","Type":"ContainerStarted","Data":"bf887f74e779326f4ff4d66b4b41dd72c08208f50b0efe1647a87d535315eb73"} Nov 29 07:58:22 crc kubenswrapper[4947]: I1129 07:58:22.987706 4947 patch_prober.go:28] interesting 
pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:58:22 crc kubenswrapper[4947]: I1129 07:58:22.988353 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:58:24 crc kubenswrapper[4947]: I1129 07:58:24.313032 4947 generic.go:334] "Generic (PLEG): container finished" podID="c4f11288-4855-42ca-afd2-5ef9419382ba" containerID="e38a1b011246f4e29951aba335d4909cc5186e6b68a897edbf7ccaa1935bda32" exitCode=0 Nov 29 07:58:24 crc kubenswrapper[4947]: I1129 07:58:24.313301 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwgmm" event={"ID":"c4f11288-4855-42ca-afd2-5ef9419382ba","Type":"ContainerDied","Data":"e38a1b011246f4e29951aba335d4909cc5186e6b68a897edbf7ccaa1935bda32"} Nov 29 07:58:26 crc kubenswrapper[4947]: I1129 07:58:26.334909 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwgmm" event={"ID":"c4f11288-4855-42ca-afd2-5ef9419382ba","Type":"ContainerStarted","Data":"5cd4c559fe3491e70a970b9f57ae99b4ba51b8469f71a828106d2ffb8c9f7dcd"} Nov 29 07:58:27 crc kubenswrapper[4947]: I1129 07:58:27.371084 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bwgmm" podStartSLOduration=3.535774742 podStartE2EDuration="11.371055641s" podCreationTimestamp="2025-11-29 07:58:16 +0000 UTC" firstStartedPulling="2025-11-29 07:58:18.237130124 +0000 UTC m=+5049.281512215" lastFinishedPulling="2025-11-29 07:58:26.072411023 +0000 UTC 
m=+5057.116793114" observedRunningTime="2025-11-29 07:58:27.366926997 +0000 UTC m=+5058.411309078" watchObservedRunningTime="2025-11-29 07:58:27.371055641 +0000 UTC m=+5058.415437722" Nov 29 07:58:36 crc kubenswrapper[4947]: I1129 07:58:36.950011 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bwgmm" Nov 29 07:58:36 crc kubenswrapper[4947]: I1129 07:58:36.950632 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bwgmm" Nov 29 07:58:36 crc kubenswrapper[4947]: I1129 07:58:36.997659 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bwgmm" Nov 29 07:58:37 crc kubenswrapper[4947]: I1129 07:58:37.501598 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bwgmm" Nov 29 07:58:37 crc kubenswrapper[4947]: I1129 07:58:37.587370 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwgmm"] Nov 29 07:58:39 crc kubenswrapper[4947]: I1129 07:58:39.465373 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bwgmm" podUID="c4f11288-4855-42ca-afd2-5ef9419382ba" containerName="registry-server" containerID="cri-o://5cd4c559fe3491e70a970b9f57ae99b4ba51b8469f71a828106d2ffb8c9f7dcd" gracePeriod=2 Nov 29 07:58:40 crc kubenswrapper[4947]: I1129 07:58:40.477715 4947 generic.go:334] "Generic (PLEG): container finished" podID="c4f11288-4855-42ca-afd2-5ef9419382ba" containerID="5cd4c559fe3491e70a970b9f57ae99b4ba51b8469f71a828106d2ffb8c9f7dcd" exitCode=0 Nov 29 07:58:40 crc kubenswrapper[4947]: I1129 07:58:40.477792 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwgmm" 
event={"ID":"c4f11288-4855-42ca-afd2-5ef9419382ba","Type":"ContainerDied","Data":"5cd4c559fe3491e70a970b9f57ae99b4ba51b8469f71a828106d2ffb8c9f7dcd"} Nov 29 07:58:40 crc kubenswrapper[4947]: I1129 07:58:40.478027 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwgmm" event={"ID":"c4f11288-4855-42ca-afd2-5ef9419382ba","Type":"ContainerDied","Data":"bf887f74e779326f4ff4d66b4b41dd72c08208f50b0efe1647a87d535315eb73"} Nov 29 07:58:40 crc kubenswrapper[4947]: I1129 07:58:40.478044 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf887f74e779326f4ff4d66b4b41dd72c08208f50b0efe1647a87d535315eb73" Nov 29 07:58:40 crc kubenswrapper[4947]: I1129 07:58:40.568356 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bwgmm" Nov 29 07:58:40 crc kubenswrapper[4947]: I1129 07:58:40.613999 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnsxn\" (UniqueName: \"kubernetes.io/projected/c4f11288-4855-42ca-afd2-5ef9419382ba-kube-api-access-lnsxn\") pod \"c4f11288-4855-42ca-afd2-5ef9419382ba\" (UID: \"c4f11288-4855-42ca-afd2-5ef9419382ba\") " Nov 29 07:58:40 crc kubenswrapper[4947]: I1129 07:58:40.614124 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4f11288-4855-42ca-afd2-5ef9419382ba-catalog-content\") pod \"c4f11288-4855-42ca-afd2-5ef9419382ba\" (UID: \"c4f11288-4855-42ca-afd2-5ef9419382ba\") " Nov 29 07:58:40 crc kubenswrapper[4947]: I1129 07:58:40.614379 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4f11288-4855-42ca-afd2-5ef9419382ba-utilities\") pod \"c4f11288-4855-42ca-afd2-5ef9419382ba\" (UID: \"c4f11288-4855-42ca-afd2-5ef9419382ba\") " Nov 29 07:58:40 crc kubenswrapper[4947]: I1129 
07:58:40.615108 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4f11288-4855-42ca-afd2-5ef9419382ba-utilities" (OuterVolumeSpecName: "utilities") pod "c4f11288-4855-42ca-afd2-5ef9419382ba" (UID: "c4f11288-4855-42ca-afd2-5ef9419382ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:58:40 crc kubenswrapper[4947]: I1129 07:58:40.620097 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f11288-4855-42ca-afd2-5ef9419382ba-kube-api-access-lnsxn" (OuterVolumeSpecName: "kube-api-access-lnsxn") pod "c4f11288-4855-42ca-afd2-5ef9419382ba" (UID: "c4f11288-4855-42ca-afd2-5ef9419382ba"). InnerVolumeSpecName "kube-api-access-lnsxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:58:40 crc kubenswrapper[4947]: I1129 07:58:40.635728 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4f11288-4855-42ca-afd2-5ef9419382ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4f11288-4855-42ca-afd2-5ef9419382ba" (UID: "c4f11288-4855-42ca-afd2-5ef9419382ba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:58:40 crc kubenswrapper[4947]: I1129 07:58:40.716574 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4f11288-4855-42ca-afd2-5ef9419382ba-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:58:40 crc kubenswrapper[4947]: I1129 07:58:40.716623 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4f11288-4855-42ca-afd2-5ef9419382ba-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:58:40 crc kubenswrapper[4947]: I1129 07:58:40.716636 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnsxn\" (UniqueName: \"kubernetes.io/projected/c4f11288-4855-42ca-afd2-5ef9419382ba-kube-api-access-lnsxn\") on node \"crc\" DevicePath \"\"" Nov 29 07:58:41 crc kubenswrapper[4947]: I1129 07:58:41.486946 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bwgmm" Nov 29 07:58:41 crc kubenswrapper[4947]: I1129 07:58:41.526356 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwgmm"] Nov 29 07:58:41 crc kubenswrapper[4947]: I1129 07:58:41.539606 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwgmm"] Nov 29 07:58:43 crc kubenswrapper[4947]: I1129 07:58:43.192858 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f11288-4855-42ca-afd2-5ef9419382ba" path="/var/lib/kubelet/pods/c4f11288-4855-42ca-afd2-5ef9419382ba/volumes" Nov 29 07:58:52 crc kubenswrapper[4947]: I1129 07:58:52.987550 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 29 07:58:52 crc kubenswrapper[4947]: I1129 07:58:52.988199 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:58:52 crc kubenswrapper[4947]: I1129 07:58:52.988277 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 07:58:52 crc kubenswrapper[4947]: I1129 07:58:52.989231 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9f080681d1601bc144f15e156ed1c8685aa9615dbcfac1411311ef254a024e7f"} pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:58:52 crc kubenswrapper[4947]: I1129 07:58:52.989306 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" containerID="cri-o://9f080681d1601bc144f15e156ed1c8685aa9615dbcfac1411311ef254a024e7f" gracePeriod=600 Nov 29 07:58:53 crc kubenswrapper[4947]: I1129 07:58:53.621335 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerID="9f080681d1601bc144f15e156ed1c8685aa9615dbcfac1411311ef254a024e7f" exitCode=0 Nov 29 07:58:53 crc kubenswrapper[4947]: I1129 07:58:53.621410 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" 
event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerDied","Data":"9f080681d1601bc144f15e156ed1c8685aa9615dbcfac1411311ef254a024e7f"} Nov 29 07:58:53 crc kubenswrapper[4947]: I1129 07:58:53.621694 4947 scope.go:117] "RemoveContainer" containerID="fd81a38636ce2198b2d17d9163a769d289f0348c4d0ee39882dd57ac20213475" Nov 29 07:58:54 crc kubenswrapper[4947]: I1129 07:58:54.637103 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5"} Nov 29 08:00:00 crc kubenswrapper[4947]: I1129 08:00:00.149718 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406720-jmwzb"] Nov 29 08:00:00 crc kubenswrapper[4947]: E1129 08:00:00.150946 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f11288-4855-42ca-afd2-5ef9419382ba" containerName="extract-utilities" Nov 29 08:00:00 crc kubenswrapper[4947]: I1129 08:00:00.150965 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f11288-4855-42ca-afd2-5ef9419382ba" containerName="extract-utilities" Nov 29 08:00:00 crc kubenswrapper[4947]: E1129 08:00:00.150993 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f11288-4855-42ca-afd2-5ef9419382ba" containerName="extract-content" Nov 29 08:00:00 crc kubenswrapper[4947]: I1129 08:00:00.151002 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f11288-4855-42ca-afd2-5ef9419382ba" containerName="extract-content" Nov 29 08:00:00 crc kubenswrapper[4947]: E1129 08:00:00.151047 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f11288-4855-42ca-afd2-5ef9419382ba" containerName="registry-server" Nov 29 08:00:00 crc kubenswrapper[4947]: I1129 08:00:00.151055 4947 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c4f11288-4855-42ca-afd2-5ef9419382ba" containerName="registry-server" Nov 29 08:00:00 crc kubenswrapper[4947]: I1129 08:00:00.151311 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f11288-4855-42ca-afd2-5ef9419382ba" containerName="registry-server" Nov 29 08:00:00 crc kubenswrapper[4947]: I1129 08:00:00.152211 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-jmwzb" Nov 29 08:00:00 crc kubenswrapper[4947]: I1129 08:00:00.155075 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 08:00:00 crc kubenswrapper[4947]: I1129 08:00:00.155327 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 08:00:00 crc kubenswrapper[4947]: I1129 08:00:00.161771 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406720-jmwzb"] Nov 29 08:00:00 crc kubenswrapper[4947]: I1129 08:00:00.237444 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr2rn\" (UniqueName: \"kubernetes.io/projected/8fbf29d8-535f-4b77-a38b-7a5893e9fc07-kube-api-access-nr2rn\") pod \"collect-profiles-29406720-jmwzb\" (UID: \"8fbf29d8-535f-4b77-a38b-7a5893e9fc07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-jmwzb" Nov 29 08:00:00 crc kubenswrapper[4947]: I1129 08:00:00.238106 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fbf29d8-535f-4b77-a38b-7a5893e9fc07-config-volume\") pod \"collect-profiles-29406720-jmwzb\" (UID: \"8fbf29d8-535f-4b77-a38b-7a5893e9fc07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-jmwzb" Nov 29 
08:00:00 crc kubenswrapper[4947]: I1129 08:00:00.238520 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8fbf29d8-535f-4b77-a38b-7a5893e9fc07-secret-volume\") pod \"collect-profiles-29406720-jmwzb\" (UID: \"8fbf29d8-535f-4b77-a38b-7a5893e9fc07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-jmwzb" Nov 29 08:00:00 crc kubenswrapper[4947]: I1129 08:00:00.341587 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr2rn\" (UniqueName: \"kubernetes.io/projected/8fbf29d8-535f-4b77-a38b-7a5893e9fc07-kube-api-access-nr2rn\") pod \"collect-profiles-29406720-jmwzb\" (UID: \"8fbf29d8-535f-4b77-a38b-7a5893e9fc07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-jmwzb" Nov 29 08:00:00 crc kubenswrapper[4947]: I1129 08:00:00.341726 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fbf29d8-535f-4b77-a38b-7a5893e9fc07-config-volume\") pod \"collect-profiles-29406720-jmwzb\" (UID: \"8fbf29d8-535f-4b77-a38b-7a5893e9fc07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-jmwzb" Nov 29 08:00:00 crc kubenswrapper[4947]: I1129 08:00:00.341770 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8fbf29d8-535f-4b77-a38b-7a5893e9fc07-secret-volume\") pod \"collect-profiles-29406720-jmwzb\" (UID: \"8fbf29d8-535f-4b77-a38b-7a5893e9fc07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-jmwzb" Nov 29 08:00:00 crc kubenswrapper[4947]: I1129 08:00:00.342879 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fbf29d8-535f-4b77-a38b-7a5893e9fc07-config-volume\") pod \"collect-profiles-29406720-jmwzb\" (UID: 
\"8fbf29d8-535f-4b77-a38b-7a5893e9fc07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-jmwzb" Nov 29 08:00:00 crc kubenswrapper[4947]: I1129 08:00:00.351110 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8fbf29d8-535f-4b77-a38b-7a5893e9fc07-secret-volume\") pod \"collect-profiles-29406720-jmwzb\" (UID: \"8fbf29d8-535f-4b77-a38b-7a5893e9fc07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-jmwzb" Nov 29 08:00:00 crc kubenswrapper[4947]: I1129 08:00:00.367488 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr2rn\" (UniqueName: \"kubernetes.io/projected/8fbf29d8-535f-4b77-a38b-7a5893e9fc07-kube-api-access-nr2rn\") pod \"collect-profiles-29406720-jmwzb\" (UID: \"8fbf29d8-535f-4b77-a38b-7a5893e9fc07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-jmwzb" Nov 29 08:00:00 crc kubenswrapper[4947]: I1129 08:00:00.493945 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-jmwzb" Nov 29 08:00:01 crc kubenswrapper[4947]: I1129 08:00:01.028048 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406720-jmwzb"] Nov 29 08:00:01 crc kubenswrapper[4947]: I1129 08:00:01.920063 4947 generic.go:334] "Generic (PLEG): container finished" podID="8fbf29d8-535f-4b77-a38b-7a5893e9fc07" containerID="54ee4dc6b8f2bf35f1c6175e9ea533d9065a1c6afe703aeb8e2450fc9925311a" exitCode=0 Nov 29 08:00:01 crc kubenswrapper[4947]: I1129 08:00:01.920212 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-jmwzb" event={"ID":"8fbf29d8-535f-4b77-a38b-7a5893e9fc07","Type":"ContainerDied","Data":"54ee4dc6b8f2bf35f1c6175e9ea533d9065a1c6afe703aeb8e2450fc9925311a"} Nov 29 08:00:01 crc kubenswrapper[4947]: I1129 08:00:01.922165 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-jmwzb" event={"ID":"8fbf29d8-535f-4b77-a38b-7a5893e9fc07","Type":"ContainerStarted","Data":"19a04695cc57fe128a4c6a40d6596beafeaf376a1bd774c2ae703d2d9fee7657"} Nov 29 08:00:03 crc kubenswrapper[4947]: I1129 08:00:03.386480 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-jmwzb" Nov 29 08:00:03 crc kubenswrapper[4947]: I1129 08:00:03.420173 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr2rn\" (UniqueName: \"kubernetes.io/projected/8fbf29d8-535f-4b77-a38b-7a5893e9fc07-kube-api-access-nr2rn\") pod \"8fbf29d8-535f-4b77-a38b-7a5893e9fc07\" (UID: \"8fbf29d8-535f-4b77-a38b-7a5893e9fc07\") " Nov 29 08:00:03 crc kubenswrapper[4947]: I1129 08:00:03.420572 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fbf29d8-535f-4b77-a38b-7a5893e9fc07-config-volume\") pod \"8fbf29d8-535f-4b77-a38b-7a5893e9fc07\" (UID: \"8fbf29d8-535f-4b77-a38b-7a5893e9fc07\") " Nov 29 08:00:03 crc kubenswrapper[4947]: I1129 08:00:03.420646 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8fbf29d8-535f-4b77-a38b-7a5893e9fc07-secret-volume\") pod \"8fbf29d8-535f-4b77-a38b-7a5893e9fc07\" (UID: \"8fbf29d8-535f-4b77-a38b-7a5893e9fc07\") " Nov 29 08:00:03 crc kubenswrapper[4947]: I1129 08:00:03.423038 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fbf29d8-535f-4b77-a38b-7a5893e9fc07-config-volume" (OuterVolumeSpecName: "config-volume") pod "8fbf29d8-535f-4b77-a38b-7a5893e9fc07" (UID: "8fbf29d8-535f-4b77-a38b-7a5893e9fc07"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:00:03 crc kubenswrapper[4947]: I1129 08:00:03.441558 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fbf29d8-535f-4b77-a38b-7a5893e9fc07-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8fbf29d8-535f-4b77-a38b-7a5893e9fc07" (UID: "8fbf29d8-535f-4b77-a38b-7a5893e9fc07"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:00:03 crc kubenswrapper[4947]: I1129 08:00:03.442122 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fbf29d8-535f-4b77-a38b-7a5893e9fc07-kube-api-access-nr2rn" (OuterVolumeSpecName: "kube-api-access-nr2rn") pod "8fbf29d8-535f-4b77-a38b-7a5893e9fc07" (UID: "8fbf29d8-535f-4b77-a38b-7a5893e9fc07"). InnerVolumeSpecName "kube-api-access-nr2rn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:00:03 crc kubenswrapper[4947]: I1129 08:00:03.523672 4947 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fbf29d8-535f-4b77-a38b-7a5893e9fc07-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:03 crc kubenswrapper[4947]: I1129 08:00:03.523719 4947 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8fbf29d8-535f-4b77-a38b-7a5893e9fc07-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:03 crc kubenswrapper[4947]: I1129 08:00:03.523735 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr2rn\" (UniqueName: \"kubernetes.io/projected/8fbf29d8-535f-4b77-a38b-7a5893e9fc07-kube-api-access-nr2rn\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:03 crc kubenswrapper[4947]: I1129 08:00:03.944572 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-jmwzb" event={"ID":"8fbf29d8-535f-4b77-a38b-7a5893e9fc07","Type":"ContainerDied","Data":"19a04695cc57fe128a4c6a40d6596beafeaf376a1bd774c2ae703d2d9fee7657"} Nov 29 08:00:03 crc kubenswrapper[4947]: I1129 08:00:03.944880 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19a04695cc57fe128a4c6a40d6596beafeaf376a1bd774c2ae703d2d9fee7657" Nov 29 08:00:03 crc kubenswrapper[4947]: I1129 08:00:03.945198 4947 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-jmwzb" Nov 29 08:00:04 crc kubenswrapper[4947]: I1129 08:00:04.486406 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406675-phqfz"] Nov 29 08:00:04 crc kubenswrapper[4947]: I1129 08:00:04.496683 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406675-phqfz"] Nov 29 08:00:05 crc kubenswrapper[4947]: I1129 08:00:05.220894 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ed24124-04a9-44f2-aef4-831e83d62724" path="/var/lib/kubelet/pods/6ed24124-04a9-44f2-aef4-831e83d62724/volumes" Nov 29 08:00:42 crc kubenswrapper[4947]: I1129 08:00:42.013839 4947 scope.go:117] "RemoveContainer" containerID="6d734f57fabf8ea5a75c2189060f1361175d77cf6c1c42a1aa4c405dc94e8bfa" Nov 29 08:01:00 crc kubenswrapper[4947]: I1129 08:01:00.226356 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29406721-fnfl5"] Nov 29 08:01:00 crc kubenswrapper[4947]: E1129 08:01:00.227533 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fbf29d8-535f-4b77-a38b-7a5893e9fc07" containerName="collect-profiles" Nov 29 08:01:00 crc kubenswrapper[4947]: I1129 08:01:00.227552 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fbf29d8-535f-4b77-a38b-7a5893e9fc07" containerName="collect-profiles" Nov 29 08:01:00 crc kubenswrapper[4947]: I1129 08:01:00.227791 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fbf29d8-535f-4b77-a38b-7a5893e9fc07" containerName="collect-profiles" Nov 29 08:01:00 crc kubenswrapper[4947]: I1129 08:01:00.228755 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29406721-fnfl5" Nov 29 08:01:00 crc kubenswrapper[4947]: I1129 08:01:00.249917 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29406721-fnfl5"] Nov 29 08:01:00 crc kubenswrapper[4947]: I1129 08:01:00.331980 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-combined-ca-bundle\") pod \"keystone-cron-29406721-fnfl5\" (UID: \"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3\") " pod="openstack/keystone-cron-29406721-fnfl5" Nov 29 08:01:00 crc kubenswrapper[4947]: I1129 08:01:00.332429 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-config-data\") pod \"keystone-cron-29406721-fnfl5\" (UID: \"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3\") " pod="openstack/keystone-cron-29406721-fnfl5" Nov 29 08:01:00 crc kubenswrapper[4947]: I1129 08:01:00.332480 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-fernet-keys\") pod \"keystone-cron-29406721-fnfl5\" (UID: \"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3\") " pod="openstack/keystone-cron-29406721-fnfl5" Nov 29 08:01:00 crc kubenswrapper[4947]: I1129 08:01:00.332512 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkwr7\" (UniqueName: \"kubernetes.io/projected/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-kube-api-access-xkwr7\") pod \"keystone-cron-29406721-fnfl5\" (UID: \"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3\") " pod="openstack/keystone-cron-29406721-fnfl5" Nov 29 08:01:00 crc kubenswrapper[4947]: I1129 08:01:00.435172 4947 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-combined-ca-bundle\") pod \"keystone-cron-29406721-fnfl5\" (UID: \"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3\") " pod="openstack/keystone-cron-29406721-fnfl5" Nov 29 08:01:00 crc kubenswrapper[4947]: I1129 08:01:00.435324 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-config-data\") pod \"keystone-cron-29406721-fnfl5\" (UID: \"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3\") " pod="openstack/keystone-cron-29406721-fnfl5" Nov 29 08:01:00 crc kubenswrapper[4947]: I1129 08:01:00.435354 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-fernet-keys\") pod \"keystone-cron-29406721-fnfl5\" (UID: \"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3\") " pod="openstack/keystone-cron-29406721-fnfl5" Nov 29 08:01:00 crc kubenswrapper[4947]: I1129 08:01:00.435386 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkwr7\" (UniqueName: \"kubernetes.io/projected/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-kube-api-access-xkwr7\") pod \"keystone-cron-29406721-fnfl5\" (UID: \"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3\") " pod="openstack/keystone-cron-29406721-fnfl5" Nov 29 08:01:00 crc kubenswrapper[4947]: I1129 08:01:00.443549 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-combined-ca-bundle\") pod \"keystone-cron-29406721-fnfl5\" (UID: \"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3\") " pod="openstack/keystone-cron-29406721-fnfl5" Nov 29 08:01:00 crc kubenswrapper[4947]: I1129 08:01:00.444084 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-fernet-keys\") pod \"keystone-cron-29406721-fnfl5\" (UID: \"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3\") " pod="openstack/keystone-cron-29406721-fnfl5" Nov 29 08:01:00 crc kubenswrapper[4947]: I1129 08:01:00.458097 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-config-data\") pod \"keystone-cron-29406721-fnfl5\" (UID: \"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3\") " pod="openstack/keystone-cron-29406721-fnfl5" Nov 29 08:01:00 crc kubenswrapper[4947]: I1129 08:01:00.463132 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkwr7\" (UniqueName: \"kubernetes.io/projected/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-kube-api-access-xkwr7\") pod \"keystone-cron-29406721-fnfl5\" (UID: \"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3\") " pod="openstack/keystone-cron-29406721-fnfl5" Nov 29 08:01:00 crc kubenswrapper[4947]: I1129 08:01:00.562109 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29406721-fnfl5" Nov 29 08:01:01 crc kubenswrapper[4947]: I1129 08:01:01.083007 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29406721-fnfl5"] Nov 29 08:01:01 crc kubenswrapper[4947]: I1129 08:01:01.502097 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406721-fnfl5" event={"ID":"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3","Type":"ContainerStarted","Data":"0241c4076a3fa54a57e0e1da4d100889e7766fc1586ad1ce3e39fceeba8b80b3"} Nov 29 08:01:01 crc kubenswrapper[4947]: I1129 08:01:01.502709 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406721-fnfl5" event={"ID":"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3","Type":"ContainerStarted","Data":"055451ffdc72ccbb6ad9949e4c4c7cf732621f36cd067e914bd164f10e56bb7c"} Nov 29 08:01:01 crc kubenswrapper[4947]: I1129 08:01:01.530371 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29406721-fnfl5" podStartSLOduration=1.53034962 podStartE2EDuration="1.53034962s" podCreationTimestamp="2025-11-29 08:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:01:01.526605736 +0000 UTC m=+5212.570987817" watchObservedRunningTime="2025-11-29 08:01:01.53034962 +0000 UTC m=+5212.574731701" Nov 29 08:01:05 crc kubenswrapper[4947]: I1129 08:01:05.544730 4947 generic.go:334] "Generic (PLEG): container finished" podID="cf41f5ca-60a1-44e7-ac93-c4230e7d8be3" containerID="0241c4076a3fa54a57e0e1da4d100889e7766fc1586ad1ce3e39fceeba8b80b3" exitCode=0 Nov 29 08:01:05 crc kubenswrapper[4947]: I1129 08:01:05.544828 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406721-fnfl5" event={"ID":"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3","Type":"ContainerDied","Data":"0241c4076a3fa54a57e0e1da4d100889e7766fc1586ad1ce3e39fceeba8b80b3"} 
Nov 29 08:01:06 crc kubenswrapper[4947]: I1129 08:01:06.991063 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29406721-fnfl5" Nov 29 08:01:07 crc kubenswrapper[4947]: I1129 08:01:07.127431 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-fernet-keys\") pod \"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3\" (UID: \"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3\") " Nov 29 08:01:07 crc kubenswrapper[4947]: I1129 08:01:07.127541 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkwr7\" (UniqueName: \"kubernetes.io/projected/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-kube-api-access-xkwr7\") pod \"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3\" (UID: \"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3\") " Nov 29 08:01:07 crc kubenswrapper[4947]: I1129 08:01:07.127586 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-config-data\") pod \"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3\" (UID: \"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3\") " Nov 29 08:01:07 crc kubenswrapper[4947]: I1129 08:01:07.127863 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-combined-ca-bundle\") pod \"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3\" (UID: \"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3\") " Nov 29 08:01:07 crc kubenswrapper[4947]: I1129 08:01:07.134357 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-kube-api-access-xkwr7" (OuterVolumeSpecName: "kube-api-access-xkwr7") pod "cf41f5ca-60a1-44e7-ac93-c4230e7d8be3" (UID: "cf41f5ca-60a1-44e7-ac93-c4230e7d8be3"). 
InnerVolumeSpecName "kube-api-access-xkwr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:01:07 crc kubenswrapper[4947]: I1129 08:01:07.138667 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "cf41f5ca-60a1-44e7-ac93-c4230e7d8be3" (UID: "cf41f5ca-60a1-44e7-ac93-c4230e7d8be3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:01:07 crc kubenswrapper[4947]: I1129 08:01:07.166742 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf41f5ca-60a1-44e7-ac93-c4230e7d8be3" (UID: "cf41f5ca-60a1-44e7-ac93-c4230e7d8be3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:01:07 crc kubenswrapper[4947]: I1129 08:01:07.217804 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-config-data" (OuterVolumeSpecName: "config-data") pod "cf41f5ca-60a1-44e7-ac93-c4230e7d8be3" (UID: "cf41f5ca-60a1-44e7-ac93-c4230e7d8be3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:01:07 crc kubenswrapper[4947]: I1129 08:01:07.230735 4947 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:01:07 crc kubenswrapper[4947]: I1129 08:01:07.230794 4947 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 29 08:01:07 crc kubenswrapper[4947]: I1129 08:01:07.230808 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkwr7\" (UniqueName: \"kubernetes.io/projected/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-kube-api-access-xkwr7\") on node \"crc\" DevicePath \"\"" Nov 29 08:01:07 crc kubenswrapper[4947]: I1129 08:01:07.230821 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf41f5ca-60a1-44e7-ac93-c4230e7d8be3-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:01:07 crc kubenswrapper[4947]: I1129 08:01:07.569920 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406721-fnfl5" event={"ID":"cf41f5ca-60a1-44e7-ac93-c4230e7d8be3","Type":"ContainerDied","Data":"055451ffdc72ccbb6ad9949e4c4c7cf732621f36cd067e914bd164f10e56bb7c"} Nov 29 08:01:07 crc kubenswrapper[4947]: I1129 08:01:07.570671 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="055451ffdc72ccbb6ad9949e4c4c7cf732621f36cd067e914bd164f10e56bb7c" Nov 29 08:01:07 crc kubenswrapper[4947]: I1129 08:01:07.570013 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29406721-fnfl5" Nov 29 08:01:22 crc kubenswrapper[4947]: I1129 08:01:22.987501 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:01:22 crc kubenswrapper[4947]: I1129 08:01:22.988484 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:01:52 crc kubenswrapper[4947]: I1129 08:01:52.988023 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:01:52 crc kubenswrapper[4947]: I1129 08:01:52.990813 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:02:22 crc kubenswrapper[4947]: I1129 08:02:22.988091 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:02:22 crc kubenswrapper[4947]: I1129 08:02:22.988627 4947 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:02:22 crc kubenswrapper[4947]: I1129 08:02:22.988683 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 08:02:22 crc kubenswrapper[4947]: I1129 08:02:22.989559 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5"} pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:02:22 crc kubenswrapper[4947]: I1129 08:02:22.989761 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" containerID="cri-o://9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" gracePeriod=600 Nov 29 08:02:23 crc kubenswrapper[4947]: I1129 08:02:23.448835 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" exitCode=0 Nov 29 08:02:23 crc kubenswrapper[4947]: I1129 08:02:23.448877 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerDied","Data":"9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5"} Nov 29 08:02:23 crc kubenswrapper[4947]: I1129 
08:02:23.448915 4947 scope.go:117] "RemoveContainer" containerID="9f080681d1601bc144f15e156ed1c8685aa9615dbcfac1411311ef254a024e7f" Nov 29 08:02:23 crc kubenswrapper[4947]: E1129 08:02:23.665674 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:02:24 crc kubenswrapper[4947]: I1129 08:02:24.460861 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:02:24 crc kubenswrapper[4947]: E1129 08:02:24.461416 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:02:39 crc kubenswrapper[4947]: I1129 08:02:39.187055 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:02:39 crc kubenswrapper[4947]: E1129 08:02:39.187924 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:02:54 crc 
kubenswrapper[4947]: I1129 08:02:54.180065 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:02:54 crc kubenswrapper[4947]: E1129 08:02:54.182870 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:03:06 crc kubenswrapper[4947]: I1129 08:03:06.179496 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:03:06 crc kubenswrapper[4947]: E1129 08:03:06.181271 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:03:18 crc kubenswrapper[4947]: I1129 08:03:18.179501 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:03:18 crc kubenswrapper[4947]: E1129 08:03:18.180772 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 
29 08:03:30 crc kubenswrapper[4947]: I1129 08:03:30.179416 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:03:30 crc kubenswrapper[4947]: E1129 08:03:30.180251 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:03:45 crc kubenswrapper[4947]: I1129 08:03:45.179044 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:03:45 crc kubenswrapper[4947]: E1129 08:03:45.179982 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:03:56 crc kubenswrapper[4947]: I1129 08:03:56.179108 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:03:56 crc kubenswrapper[4947]: E1129 08:03:56.179975 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" 
podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:04:11 crc kubenswrapper[4947]: I1129 08:04:11.179355 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:04:11 crc kubenswrapper[4947]: E1129 08:04:11.180346 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:04:24 crc kubenswrapper[4947]: I1129 08:04:24.179307 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:04:24 crc kubenswrapper[4947]: E1129 08:04:24.180154 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:04:39 crc kubenswrapper[4947]: I1129 08:04:39.186670 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:04:39 crc kubenswrapper[4947]: E1129 08:04:39.187455 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:04:42 crc kubenswrapper[4947]: I1129 08:04:42.158803 4947 scope.go:117] "RemoveContainer" containerID="e38a1b011246f4e29951aba335d4909cc5186e6b68a897edbf7ccaa1935bda32" Nov 29 08:04:42 crc kubenswrapper[4947]: I1129 08:04:42.190840 4947 scope.go:117] "RemoveContainer" containerID="97e29141337be01eff6e522ecde3e8b2d9ef391535c98627e1dace2b8a744c8f" Nov 29 08:04:42 crc kubenswrapper[4947]: I1129 08:04:42.236994 4947 scope.go:117] "RemoveContainer" containerID="5cd4c559fe3491e70a970b9f57ae99b4ba51b8469f71a828106d2ffb8c9f7dcd" Nov 29 08:04:53 crc kubenswrapper[4947]: I1129 08:04:53.179536 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:04:53 crc kubenswrapper[4947]: E1129 08:04:53.181480 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:05:07 crc kubenswrapper[4947]: I1129 08:05:07.179146 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:05:07 crc kubenswrapper[4947]: E1129 08:05:07.180063 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 
29 08:05:18 crc kubenswrapper[4947]: I1129 08:05:18.180025 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:05:18 crc kubenswrapper[4947]: E1129 08:05:18.180690 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:05:31 crc kubenswrapper[4947]: I1129 08:05:31.179940 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:05:31 crc kubenswrapper[4947]: E1129 08:05:31.180843 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:05:44 crc kubenswrapper[4947]: I1129 08:05:44.178798 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:05:44 crc kubenswrapper[4947]: E1129 08:05:44.179594 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" 
podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:05:55 crc kubenswrapper[4947]: I1129 08:05:55.179551 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:05:55 crc kubenswrapper[4947]: E1129 08:05:55.180591 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:06:06 crc kubenswrapper[4947]: I1129 08:06:06.179591 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:06:06 crc kubenswrapper[4947]: E1129 08:06:06.180627 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:06:21 crc kubenswrapper[4947]: I1129 08:06:21.179117 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:06:21 crc kubenswrapper[4947]: E1129 08:06:21.180120 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:06:30 crc kubenswrapper[4947]: I1129 08:06:30.237210 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dq9tk"] Nov 29 08:06:30 crc kubenswrapper[4947]: E1129 08:06:30.238016 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf41f5ca-60a1-44e7-ac93-c4230e7d8be3" containerName="keystone-cron" Nov 29 08:06:30 crc kubenswrapper[4947]: I1129 08:06:30.238031 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf41f5ca-60a1-44e7-ac93-c4230e7d8be3" containerName="keystone-cron" Nov 29 08:06:30 crc kubenswrapper[4947]: I1129 08:06:30.239618 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf41f5ca-60a1-44e7-ac93-c4230e7d8be3" containerName="keystone-cron" Nov 29 08:06:30 crc kubenswrapper[4947]: I1129 08:06:30.240975 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dq9tk" Nov 29 08:06:30 crc kubenswrapper[4947]: I1129 08:06:30.251601 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dq9tk"] Nov 29 08:06:30 crc kubenswrapper[4947]: I1129 08:06:30.373896 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz4m6\" (UniqueName: \"kubernetes.io/projected/23aac1e3-893c-4b94-bc08-a719c7ad8566-kube-api-access-qz4m6\") pod \"community-operators-dq9tk\" (UID: \"23aac1e3-893c-4b94-bc08-a719c7ad8566\") " pod="openshift-marketplace/community-operators-dq9tk" Nov 29 08:06:30 crc kubenswrapper[4947]: I1129 08:06:30.374491 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23aac1e3-893c-4b94-bc08-a719c7ad8566-catalog-content\") pod \"community-operators-dq9tk\" (UID: 
\"23aac1e3-893c-4b94-bc08-a719c7ad8566\") " pod="openshift-marketplace/community-operators-dq9tk" Nov 29 08:06:30 crc kubenswrapper[4947]: I1129 08:06:30.374598 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23aac1e3-893c-4b94-bc08-a719c7ad8566-utilities\") pod \"community-operators-dq9tk\" (UID: \"23aac1e3-893c-4b94-bc08-a719c7ad8566\") " pod="openshift-marketplace/community-operators-dq9tk" Nov 29 08:06:30 crc kubenswrapper[4947]: I1129 08:06:30.476850 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz4m6\" (UniqueName: \"kubernetes.io/projected/23aac1e3-893c-4b94-bc08-a719c7ad8566-kube-api-access-qz4m6\") pod \"community-operators-dq9tk\" (UID: \"23aac1e3-893c-4b94-bc08-a719c7ad8566\") " pod="openshift-marketplace/community-operators-dq9tk" Nov 29 08:06:30 crc kubenswrapper[4947]: I1129 08:06:30.476933 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23aac1e3-893c-4b94-bc08-a719c7ad8566-catalog-content\") pod \"community-operators-dq9tk\" (UID: \"23aac1e3-893c-4b94-bc08-a719c7ad8566\") " pod="openshift-marketplace/community-operators-dq9tk" Nov 29 08:06:30 crc kubenswrapper[4947]: I1129 08:06:30.477021 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23aac1e3-893c-4b94-bc08-a719c7ad8566-utilities\") pod \"community-operators-dq9tk\" (UID: \"23aac1e3-893c-4b94-bc08-a719c7ad8566\") " pod="openshift-marketplace/community-operators-dq9tk" Nov 29 08:06:30 crc kubenswrapper[4947]: I1129 08:06:30.477895 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23aac1e3-893c-4b94-bc08-a719c7ad8566-utilities\") pod \"community-operators-dq9tk\" (UID: 
\"23aac1e3-893c-4b94-bc08-a719c7ad8566\") " pod="openshift-marketplace/community-operators-dq9tk" Nov 29 08:06:30 crc kubenswrapper[4947]: I1129 08:06:30.478251 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23aac1e3-893c-4b94-bc08-a719c7ad8566-catalog-content\") pod \"community-operators-dq9tk\" (UID: \"23aac1e3-893c-4b94-bc08-a719c7ad8566\") " pod="openshift-marketplace/community-operators-dq9tk" Nov 29 08:06:30 crc kubenswrapper[4947]: I1129 08:06:30.510624 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz4m6\" (UniqueName: \"kubernetes.io/projected/23aac1e3-893c-4b94-bc08-a719c7ad8566-kube-api-access-qz4m6\") pod \"community-operators-dq9tk\" (UID: \"23aac1e3-893c-4b94-bc08-a719c7ad8566\") " pod="openshift-marketplace/community-operators-dq9tk" Nov 29 08:06:30 crc kubenswrapper[4947]: I1129 08:06:30.566032 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dq9tk" Nov 29 08:06:31 crc kubenswrapper[4947]: I1129 08:06:31.238902 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dq9tk"] Nov 29 08:06:31 crc kubenswrapper[4947]: I1129 08:06:31.352182 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dq9tk" event={"ID":"23aac1e3-893c-4b94-bc08-a719c7ad8566","Type":"ContainerStarted","Data":"e03d3a3122a81b9fab4c8f7f5190ce15a7c78552dee05dbede2138bcd1222369"} Nov 29 08:06:32 crc kubenswrapper[4947]: I1129 08:06:32.363664 4947 generic.go:334] "Generic (PLEG): container finished" podID="23aac1e3-893c-4b94-bc08-a719c7ad8566" containerID="43b9e326eaa02e4fa7d0f258fdbef08ec466fddb3259fd0a21845d5e80c329ed" exitCode=0 Nov 29 08:06:32 crc kubenswrapper[4947]: I1129 08:06:32.363725 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dq9tk" event={"ID":"23aac1e3-893c-4b94-bc08-a719c7ad8566","Type":"ContainerDied","Data":"43b9e326eaa02e4fa7d0f258fdbef08ec466fddb3259fd0a21845d5e80c329ed"} Nov 29 08:06:32 crc kubenswrapper[4947]: I1129 08:06:32.366303 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 08:06:36 crc kubenswrapper[4947]: I1129 08:06:36.179317 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:06:36 crc kubenswrapper[4947]: E1129 08:06:36.180170 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 
08:06:37 crc kubenswrapper[4947]: I1129 08:06:37.415904 4947 generic.go:334] "Generic (PLEG): container finished" podID="23aac1e3-893c-4b94-bc08-a719c7ad8566" containerID="5fa71ba3650f1d2295a6187c85795bdf3153d035f5c5a9e8d627db503160813b" exitCode=0 Nov 29 08:06:37 crc kubenswrapper[4947]: I1129 08:06:37.415976 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dq9tk" event={"ID":"23aac1e3-893c-4b94-bc08-a719c7ad8566","Type":"ContainerDied","Data":"5fa71ba3650f1d2295a6187c85795bdf3153d035f5c5a9e8d627db503160813b"} Nov 29 08:06:38 crc kubenswrapper[4947]: I1129 08:06:38.426961 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dq9tk" event={"ID":"23aac1e3-893c-4b94-bc08-a719c7ad8566","Type":"ContainerStarted","Data":"9941d306cb8ec3e3fee1fd01eba0b4096aeb94197a0b8fa0d000edeb891864f1"} Nov 29 08:06:38 crc kubenswrapper[4947]: I1129 08:06:38.449272 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dq9tk" podStartSLOduration=2.782743911 podStartE2EDuration="8.44924605s" podCreationTimestamp="2025-11-29 08:06:30 +0000 UTC" firstStartedPulling="2025-11-29 08:06:32.366031562 +0000 UTC m=+5543.410413653" lastFinishedPulling="2025-11-29 08:06:38.032533711 +0000 UTC m=+5549.076915792" observedRunningTime="2025-11-29 08:06:38.449100287 +0000 UTC m=+5549.493482368" watchObservedRunningTime="2025-11-29 08:06:38.44924605 +0000 UTC m=+5549.493628131" Nov 29 08:06:40 crc kubenswrapper[4947]: I1129 08:06:40.568421 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dq9tk" Nov 29 08:06:40 crc kubenswrapper[4947]: I1129 08:06:40.569812 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dq9tk" Nov 29 08:06:40 crc kubenswrapper[4947]: I1129 08:06:40.619735 4947 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dq9tk" Nov 29 08:06:49 crc kubenswrapper[4947]: I1129 08:06:49.186524 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:06:49 crc kubenswrapper[4947]: E1129 08:06:49.187321 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:06:50 crc kubenswrapper[4947]: I1129 08:06:50.620607 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dq9tk" Nov 29 08:06:50 crc kubenswrapper[4947]: I1129 08:06:50.763380 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dq9tk"] Nov 29 08:06:50 crc kubenswrapper[4947]: I1129 08:06:50.809172 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7wkp9"] Nov 29 08:06:50 crc kubenswrapper[4947]: I1129 08:06:50.809895 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7wkp9" podUID="2435dbb1-46ea-4b97-a974-f00fbfa54c0d" containerName="registry-server" containerID="cri-o://39823f772845f8afcae281985d1c6a0e211a6284f1f52f7aef2b71004dedd123" gracePeriod=2 Nov 29 08:06:51 crc kubenswrapper[4947]: I1129 08:06:51.554812 4947 generic.go:334] "Generic (PLEG): container finished" podID="2435dbb1-46ea-4b97-a974-f00fbfa54c0d" containerID="39823f772845f8afcae281985d1c6a0e211a6284f1f52f7aef2b71004dedd123" exitCode=0 Nov 29 08:06:51 crc kubenswrapper[4947]: I1129 
08:06:51.555186 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wkp9" event={"ID":"2435dbb1-46ea-4b97-a974-f00fbfa54c0d","Type":"ContainerDied","Data":"39823f772845f8afcae281985d1c6a0e211a6284f1f52f7aef2b71004dedd123"} Nov 29 08:06:52 crc kubenswrapper[4947]: I1129 08:06:52.151574 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wkp9" Nov 29 08:06:52 crc kubenswrapper[4947]: I1129 08:06:52.262164 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2435dbb1-46ea-4b97-a974-f00fbfa54c0d-catalog-content\") pod \"2435dbb1-46ea-4b97-a974-f00fbfa54c0d\" (UID: \"2435dbb1-46ea-4b97-a974-f00fbfa54c0d\") " Nov 29 08:06:52 crc kubenswrapper[4947]: I1129 08:06:52.262368 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2435dbb1-46ea-4b97-a974-f00fbfa54c0d-utilities\") pod \"2435dbb1-46ea-4b97-a974-f00fbfa54c0d\" (UID: \"2435dbb1-46ea-4b97-a974-f00fbfa54c0d\") " Nov 29 08:06:52 crc kubenswrapper[4947]: I1129 08:06:52.262423 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kp5j\" (UniqueName: \"kubernetes.io/projected/2435dbb1-46ea-4b97-a974-f00fbfa54c0d-kube-api-access-2kp5j\") pod \"2435dbb1-46ea-4b97-a974-f00fbfa54c0d\" (UID: \"2435dbb1-46ea-4b97-a974-f00fbfa54c0d\") " Nov 29 08:06:52 crc kubenswrapper[4947]: I1129 08:06:52.269183 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2435dbb1-46ea-4b97-a974-f00fbfa54c0d-kube-api-access-2kp5j" (OuterVolumeSpecName: "kube-api-access-2kp5j") pod "2435dbb1-46ea-4b97-a974-f00fbfa54c0d" (UID: "2435dbb1-46ea-4b97-a974-f00fbfa54c0d"). InnerVolumeSpecName "kube-api-access-2kp5j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:06:52 crc kubenswrapper[4947]: I1129 08:06:52.272431 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2435dbb1-46ea-4b97-a974-f00fbfa54c0d-utilities" (OuterVolumeSpecName: "utilities") pod "2435dbb1-46ea-4b97-a974-f00fbfa54c0d" (UID: "2435dbb1-46ea-4b97-a974-f00fbfa54c0d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:06:52 crc kubenswrapper[4947]: I1129 08:06:52.364864 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2435dbb1-46ea-4b97-a974-f00fbfa54c0d-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:52 crc kubenswrapper[4947]: I1129 08:06:52.365250 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kp5j\" (UniqueName: \"kubernetes.io/projected/2435dbb1-46ea-4b97-a974-f00fbfa54c0d-kube-api-access-2kp5j\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:52 crc kubenswrapper[4947]: I1129 08:06:52.390977 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2435dbb1-46ea-4b97-a974-f00fbfa54c0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2435dbb1-46ea-4b97-a974-f00fbfa54c0d" (UID: "2435dbb1-46ea-4b97-a974-f00fbfa54c0d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:06:52 crc kubenswrapper[4947]: I1129 08:06:52.467148 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2435dbb1-46ea-4b97-a974-f00fbfa54c0d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:52 crc kubenswrapper[4947]: I1129 08:06:52.576452 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wkp9" event={"ID":"2435dbb1-46ea-4b97-a974-f00fbfa54c0d","Type":"ContainerDied","Data":"698331b1cb36e149e9b605cd3f19dbd893a5c9ff661951b474b605341c4168c3"} Nov 29 08:06:52 crc kubenswrapper[4947]: I1129 08:06:52.576511 4947 scope.go:117] "RemoveContainer" containerID="39823f772845f8afcae281985d1c6a0e211a6284f1f52f7aef2b71004dedd123" Nov 29 08:06:52 crc kubenswrapper[4947]: I1129 08:06:52.576591 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wkp9" Nov 29 08:06:52 crc kubenswrapper[4947]: I1129 08:06:52.635288 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7wkp9"] Nov 29 08:06:52 crc kubenswrapper[4947]: I1129 08:06:52.645882 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7wkp9"] Nov 29 08:06:52 crc kubenswrapper[4947]: I1129 08:06:52.659971 4947 scope.go:117] "RemoveContainer" containerID="6812d8d1f9afed58a0aab72ef5e7e52d8373e7b1d1cc32a30b6023acb22592dc" Nov 29 08:06:52 crc kubenswrapper[4947]: I1129 08:06:52.692948 4947 scope.go:117] "RemoveContainer" containerID="2fb9d7cd77d1949c827971d24b0102419468b5d8b14428e35bbc59a829f4a148" Nov 29 08:06:53 crc kubenswrapper[4947]: I1129 08:06:53.198033 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2435dbb1-46ea-4b97-a974-f00fbfa54c0d" path="/var/lib/kubelet/pods/2435dbb1-46ea-4b97-a974-f00fbfa54c0d/volumes" Nov 29 08:07:04 crc 
kubenswrapper[4947]: I1129 08:07:04.179172 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:07:04 crc kubenswrapper[4947]: E1129 08:07:04.180085 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:07:19 crc kubenswrapper[4947]: I1129 08:07:19.184858 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:07:19 crc kubenswrapper[4947]: E1129 08:07:19.185670 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:07:24 crc kubenswrapper[4947]: I1129 08:07:24.619155 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8gbkb"] Nov 29 08:07:24 crc kubenswrapper[4947]: E1129 08:07:24.620818 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2435dbb1-46ea-4b97-a974-f00fbfa54c0d" containerName="extract-content" Nov 29 08:07:24 crc kubenswrapper[4947]: I1129 08:07:24.620838 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="2435dbb1-46ea-4b97-a974-f00fbfa54c0d" containerName="extract-content" Nov 29 08:07:24 crc kubenswrapper[4947]: E1129 08:07:24.620867 4947 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="2435dbb1-46ea-4b97-a974-f00fbfa54c0d" containerName="registry-server" Nov 29 08:07:24 crc kubenswrapper[4947]: I1129 08:07:24.620875 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="2435dbb1-46ea-4b97-a974-f00fbfa54c0d" containerName="registry-server" Nov 29 08:07:24 crc kubenswrapper[4947]: E1129 08:07:24.620891 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2435dbb1-46ea-4b97-a974-f00fbfa54c0d" containerName="extract-utilities" Nov 29 08:07:24 crc kubenswrapper[4947]: I1129 08:07:24.620899 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="2435dbb1-46ea-4b97-a974-f00fbfa54c0d" containerName="extract-utilities" Nov 29 08:07:24 crc kubenswrapper[4947]: I1129 08:07:24.621391 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="2435dbb1-46ea-4b97-a974-f00fbfa54c0d" containerName="registry-server" Nov 29 08:07:24 crc kubenswrapper[4947]: I1129 08:07:24.623889 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8gbkb" Nov 29 08:07:24 crc kubenswrapper[4947]: I1129 08:07:24.634028 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8gbkb"] Nov 29 08:07:24 crc kubenswrapper[4947]: I1129 08:07:24.703736 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4ed605-37a1-4b03-832c-5d399bc42b99-catalog-content\") pod \"redhat-operators-8gbkb\" (UID: \"7a4ed605-37a1-4b03-832c-5d399bc42b99\") " pod="openshift-marketplace/redhat-operators-8gbkb" Nov 29 08:07:24 crc kubenswrapper[4947]: I1129 08:07:24.703845 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr9jg\" (UniqueName: \"kubernetes.io/projected/7a4ed605-37a1-4b03-832c-5d399bc42b99-kube-api-access-sr9jg\") pod \"redhat-operators-8gbkb\" (UID: \"7a4ed605-37a1-4b03-832c-5d399bc42b99\") " pod="openshift-marketplace/redhat-operators-8gbkb" Nov 29 08:07:24 crc kubenswrapper[4947]: I1129 08:07:24.704005 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4ed605-37a1-4b03-832c-5d399bc42b99-utilities\") pod \"redhat-operators-8gbkb\" (UID: \"7a4ed605-37a1-4b03-832c-5d399bc42b99\") " pod="openshift-marketplace/redhat-operators-8gbkb" Nov 29 08:07:24 crc kubenswrapper[4947]: I1129 08:07:24.806382 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4ed605-37a1-4b03-832c-5d399bc42b99-catalog-content\") pod \"redhat-operators-8gbkb\" (UID: \"7a4ed605-37a1-4b03-832c-5d399bc42b99\") " pod="openshift-marketplace/redhat-operators-8gbkb" Nov 29 08:07:24 crc kubenswrapper[4947]: I1129 08:07:24.806490 4947 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-sr9jg\" (UniqueName: \"kubernetes.io/projected/7a4ed605-37a1-4b03-832c-5d399bc42b99-kube-api-access-sr9jg\") pod \"redhat-operators-8gbkb\" (UID: \"7a4ed605-37a1-4b03-832c-5d399bc42b99\") " pod="openshift-marketplace/redhat-operators-8gbkb" Nov 29 08:07:24 crc kubenswrapper[4947]: I1129 08:07:24.806625 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4ed605-37a1-4b03-832c-5d399bc42b99-utilities\") pod \"redhat-operators-8gbkb\" (UID: \"7a4ed605-37a1-4b03-832c-5d399bc42b99\") " pod="openshift-marketplace/redhat-operators-8gbkb" Nov 29 08:07:24 crc kubenswrapper[4947]: I1129 08:07:24.807357 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4ed605-37a1-4b03-832c-5d399bc42b99-utilities\") pod \"redhat-operators-8gbkb\" (UID: \"7a4ed605-37a1-4b03-832c-5d399bc42b99\") " pod="openshift-marketplace/redhat-operators-8gbkb" Nov 29 08:07:24 crc kubenswrapper[4947]: I1129 08:07:24.807751 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4ed605-37a1-4b03-832c-5d399bc42b99-catalog-content\") pod \"redhat-operators-8gbkb\" (UID: \"7a4ed605-37a1-4b03-832c-5d399bc42b99\") " pod="openshift-marketplace/redhat-operators-8gbkb" Nov 29 08:07:24 crc kubenswrapper[4947]: I1129 08:07:24.842520 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr9jg\" (UniqueName: \"kubernetes.io/projected/7a4ed605-37a1-4b03-832c-5d399bc42b99-kube-api-access-sr9jg\") pod \"redhat-operators-8gbkb\" (UID: \"7a4ed605-37a1-4b03-832c-5d399bc42b99\") " pod="openshift-marketplace/redhat-operators-8gbkb" Nov 29 08:07:24 crc kubenswrapper[4947]: I1129 08:07:24.964172 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8gbkb" Nov 29 08:07:25 crc kubenswrapper[4947]: I1129 08:07:25.558424 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8gbkb"] Nov 29 08:07:25 crc kubenswrapper[4947]: I1129 08:07:25.879505 4947 generic.go:334] "Generic (PLEG): container finished" podID="7a4ed605-37a1-4b03-832c-5d399bc42b99" containerID="2da2210b0e4511e76c1569373e3514ff53cc50d430173d68aa4b78d0ffc0ee4b" exitCode=0 Nov 29 08:07:25 crc kubenswrapper[4947]: I1129 08:07:25.879743 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gbkb" event={"ID":"7a4ed605-37a1-4b03-832c-5d399bc42b99","Type":"ContainerDied","Data":"2da2210b0e4511e76c1569373e3514ff53cc50d430173d68aa4b78d0ffc0ee4b"} Nov 29 08:07:25 crc kubenswrapper[4947]: I1129 08:07:25.879770 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gbkb" event={"ID":"7a4ed605-37a1-4b03-832c-5d399bc42b99","Type":"ContainerStarted","Data":"fa7cce52e9c1e8fd41c8e471475f4e6811727aaea38ac0f2c2eef7e59872dfc1"} Nov 29 08:07:26 crc kubenswrapper[4947]: E1129 08:07:26.079026 4947 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a4ed605_37a1_4b03_832c_5d399bc42b99.slice/crio-conmon-2da2210b0e4511e76c1569373e3514ff53cc50d430173d68aa4b78d0ffc0ee4b.scope\": RecentStats: unable to find data in memory cache]" Nov 29 08:07:27 crc kubenswrapper[4947]: I1129 08:07:27.910940 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gbkb" event={"ID":"7a4ed605-37a1-4b03-832c-5d399bc42b99","Type":"ContainerStarted","Data":"4047dc4631283839106e154940f48195ce3d79a776cef519f1a7dadc4d45567c"} Nov 29 08:07:28 crc kubenswrapper[4947]: I1129 08:07:28.023181 4947 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-p5kpt"] Nov 29 08:07:28 crc kubenswrapper[4947]: I1129 08:07:28.025764 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p5kpt" Nov 29 08:07:28 crc kubenswrapper[4947]: I1129 08:07:28.046268 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p5kpt"] Nov 29 08:07:28 crc kubenswrapper[4947]: I1129 08:07:28.080697 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7xb2\" (UniqueName: \"kubernetes.io/projected/ad83548f-5649-4f89-900c-313df6cc048e-kube-api-access-s7xb2\") pod \"certified-operators-p5kpt\" (UID: \"ad83548f-5649-4f89-900c-313df6cc048e\") " pod="openshift-marketplace/certified-operators-p5kpt" Nov 29 08:07:28 crc kubenswrapper[4947]: I1129 08:07:28.080777 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad83548f-5649-4f89-900c-313df6cc048e-catalog-content\") pod \"certified-operators-p5kpt\" (UID: \"ad83548f-5649-4f89-900c-313df6cc048e\") " pod="openshift-marketplace/certified-operators-p5kpt" Nov 29 08:07:28 crc kubenswrapper[4947]: I1129 08:07:28.080952 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad83548f-5649-4f89-900c-313df6cc048e-utilities\") pod \"certified-operators-p5kpt\" (UID: \"ad83548f-5649-4f89-900c-313df6cc048e\") " pod="openshift-marketplace/certified-operators-p5kpt" Nov 29 08:07:28 crc kubenswrapper[4947]: I1129 08:07:28.182013 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad83548f-5649-4f89-900c-313df6cc048e-utilities\") pod \"certified-operators-p5kpt\" (UID: \"ad83548f-5649-4f89-900c-313df6cc048e\") 
" pod="openshift-marketplace/certified-operators-p5kpt" Nov 29 08:07:28 crc kubenswrapper[4947]: I1129 08:07:28.182081 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7xb2\" (UniqueName: \"kubernetes.io/projected/ad83548f-5649-4f89-900c-313df6cc048e-kube-api-access-s7xb2\") pod \"certified-operators-p5kpt\" (UID: \"ad83548f-5649-4f89-900c-313df6cc048e\") " pod="openshift-marketplace/certified-operators-p5kpt" Nov 29 08:07:28 crc kubenswrapper[4947]: I1129 08:07:28.182113 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad83548f-5649-4f89-900c-313df6cc048e-catalog-content\") pod \"certified-operators-p5kpt\" (UID: \"ad83548f-5649-4f89-900c-313df6cc048e\") " pod="openshift-marketplace/certified-operators-p5kpt" Nov 29 08:07:28 crc kubenswrapper[4947]: I1129 08:07:28.182823 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad83548f-5649-4f89-900c-313df6cc048e-utilities\") pod \"certified-operators-p5kpt\" (UID: \"ad83548f-5649-4f89-900c-313df6cc048e\") " pod="openshift-marketplace/certified-operators-p5kpt" Nov 29 08:07:28 crc kubenswrapper[4947]: I1129 08:07:28.186396 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad83548f-5649-4f89-900c-313df6cc048e-catalog-content\") pod \"certified-operators-p5kpt\" (UID: \"ad83548f-5649-4f89-900c-313df6cc048e\") " pod="openshift-marketplace/certified-operators-p5kpt" Nov 29 08:07:28 crc kubenswrapper[4947]: I1129 08:07:28.236208 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7xb2\" (UniqueName: \"kubernetes.io/projected/ad83548f-5649-4f89-900c-313df6cc048e-kube-api-access-s7xb2\") pod \"certified-operators-p5kpt\" (UID: \"ad83548f-5649-4f89-900c-313df6cc048e\") " 
pod="openshift-marketplace/certified-operators-p5kpt" Nov 29 08:07:28 crc kubenswrapper[4947]: I1129 08:07:28.370554 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p5kpt" Nov 29 08:07:28 crc kubenswrapper[4947]: I1129 08:07:28.934242 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p5kpt"] Nov 29 08:07:28 crc kubenswrapper[4947]: W1129 08:07:28.946947 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad83548f_5649_4f89_900c_313df6cc048e.slice/crio-0034c75c095c84f746b833a02e9c70051e0cffc5008f9b4f021bb9c588ebc2cc WatchSource:0}: Error finding container 0034c75c095c84f746b833a02e9c70051e0cffc5008f9b4f021bb9c588ebc2cc: Status 404 returned error can't find the container with id 0034c75c095c84f746b833a02e9c70051e0cffc5008f9b4f021bb9c588ebc2cc Nov 29 08:07:29 crc kubenswrapper[4947]: I1129 08:07:29.930182 4947 generic.go:334] "Generic (PLEG): container finished" podID="ad83548f-5649-4f89-900c-313df6cc048e" containerID="d0dc308eb078d44d7498d043e5b44fcb9c7663a76dfbe6203c85c3a9354b27fc" exitCode=0 Nov 29 08:07:29 crc kubenswrapper[4947]: I1129 08:07:29.930294 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p5kpt" event={"ID":"ad83548f-5649-4f89-900c-313df6cc048e","Type":"ContainerDied","Data":"d0dc308eb078d44d7498d043e5b44fcb9c7663a76dfbe6203c85c3a9354b27fc"} Nov 29 08:07:29 crc kubenswrapper[4947]: I1129 08:07:29.930542 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p5kpt" event={"ID":"ad83548f-5649-4f89-900c-313df6cc048e","Type":"ContainerStarted","Data":"0034c75c095c84f746b833a02e9c70051e0cffc5008f9b4f021bb9c588ebc2cc"} Nov 29 08:07:29 crc kubenswrapper[4947]: I1129 08:07:29.934781 4947 generic.go:334] "Generic (PLEG): container finished" 
podID="7a4ed605-37a1-4b03-832c-5d399bc42b99" containerID="4047dc4631283839106e154940f48195ce3d79a776cef519f1a7dadc4d45567c" exitCode=0 Nov 29 08:07:29 crc kubenswrapper[4947]: I1129 08:07:29.934829 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gbkb" event={"ID":"7a4ed605-37a1-4b03-832c-5d399bc42b99","Type":"ContainerDied","Data":"4047dc4631283839106e154940f48195ce3d79a776cef519f1a7dadc4d45567c"} Nov 29 08:07:31 crc kubenswrapper[4947]: I1129 08:07:31.977511 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p5kpt" event={"ID":"ad83548f-5649-4f89-900c-313df6cc048e","Type":"ContainerStarted","Data":"670099837baee121157afc96614842aa494871d03c3b368051ce258ce4413b97"} Nov 29 08:07:31 crc kubenswrapper[4947]: I1129 08:07:31.980907 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gbkb" event={"ID":"7a4ed605-37a1-4b03-832c-5d399bc42b99","Type":"ContainerStarted","Data":"5db97f828fca9626ca5ed28e5ad73d56af73c2e32594aa3919258862995fce12"} Nov 29 08:07:32 crc kubenswrapper[4947]: I1129 08:07:32.019718 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8gbkb" podStartSLOduration=2.495150613 podStartE2EDuration="8.019696709s" podCreationTimestamp="2025-11-29 08:07:24 +0000 UTC" firstStartedPulling="2025-11-29 08:07:25.881563999 +0000 UTC m=+5596.925946080" lastFinishedPulling="2025-11-29 08:07:31.406110105 +0000 UTC m=+5602.450492176" observedRunningTime="2025-11-29 08:07:32.017479804 +0000 UTC m=+5603.061861885" watchObservedRunningTime="2025-11-29 08:07:32.019696709 +0000 UTC m=+5603.064078790" Nov 29 08:07:32 crc kubenswrapper[4947]: I1129 08:07:32.991187 4947 generic.go:334] "Generic (PLEG): container finished" podID="ad83548f-5649-4f89-900c-313df6cc048e" containerID="670099837baee121157afc96614842aa494871d03c3b368051ce258ce4413b97" exitCode=0 Nov 29 
08:07:32 crc kubenswrapper[4947]: I1129 08:07:32.991352 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p5kpt" event={"ID":"ad83548f-5649-4f89-900c-313df6cc048e","Type":"ContainerDied","Data":"670099837baee121157afc96614842aa494871d03c3b368051ce258ce4413b97"} Nov 29 08:07:33 crc kubenswrapper[4947]: I1129 08:07:33.184700 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:07:34 crc kubenswrapper[4947]: I1129 08:07:34.964539 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8gbkb" Nov 29 08:07:34 crc kubenswrapper[4947]: I1129 08:07:34.965146 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8gbkb" Nov 29 08:07:35 crc kubenswrapper[4947]: I1129 08:07:35.011042 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"f016ad3b48559c4e1b707ec7329efae28a02ca95c948d4232c8f1dc15b1126b6"} Nov 29 08:07:36 crc kubenswrapper[4947]: I1129 08:07:36.014000 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8gbkb" podUID="7a4ed605-37a1-4b03-832c-5d399bc42b99" containerName="registry-server" probeResult="failure" output=< Nov 29 08:07:36 crc kubenswrapper[4947]: timeout: failed to connect service ":50051" within 1s Nov 29 08:07:36 crc kubenswrapper[4947]: > Nov 29 08:07:38 crc kubenswrapper[4947]: I1129 08:07:38.038088 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p5kpt" event={"ID":"ad83548f-5649-4f89-900c-313df6cc048e","Type":"ContainerStarted","Data":"d8b3e65118fb8c6f348102658433461abc03cfcceec0a0ad4189a2b189f4273f"} Nov 29 08:07:38 crc kubenswrapper[4947]: I1129 
08:07:38.067657 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p5kpt" podStartSLOduration=3.430871047 podStartE2EDuration="11.067628839s" podCreationTimestamp="2025-11-29 08:07:27 +0000 UTC" firstStartedPulling="2025-11-29 08:07:29.932623536 +0000 UTC m=+5600.977005617" lastFinishedPulling="2025-11-29 08:07:37.569381328 +0000 UTC m=+5608.613763409" observedRunningTime="2025-11-29 08:07:38.057761031 +0000 UTC m=+5609.102143122" watchObservedRunningTime="2025-11-29 08:07:38.067628839 +0000 UTC m=+5609.112010910" Nov 29 08:07:38 crc kubenswrapper[4947]: I1129 08:07:38.371354 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p5kpt" Nov 29 08:07:38 crc kubenswrapper[4947]: I1129 08:07:38.371420 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p5kpt" Nov 29 08:07:39 crc kubenswrapper[4947]: I1129 08:07:39.425984 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-p5kpt" podUID="ad83548f-5649-4f89-900c-313df6cc048e" containerName="registry-server" probeResult="failure" output=< Nov 29 08:07:39 crc kubenswrapper[4947]: timeout: failed to connect service ":50051" within 1s Nov 29 08:07:39 crc kubenswrapper[4947]: > Nov 29 08:07:45 crc kubenswrapper[4947]: I1129 08:07:45.039736 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8gbkb" Nov 29 08:07:45 crc kubenswrapper[4947]: I1129 08:07:45.096718 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8gbkb" Nov 29 08:07:45 crc kubenswrapper[4947]: I1129 08:07:45.280422 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8gbkb"] Nov 29 08:07:46 crc kubenswrapper[4947]: I1129 08:07:46.115884 
4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8gbkb" podUID="7a4ed605-37a1-4b03-832c-5d399bc42b99" containerName="registry-server" containerID="cri-o://5db97f828fca9626ca5ed28e5ad73d56af73c2e32594aa3919258862995fce12" gracePeriod=2 Nov 29 08:07:46 crc kubenswrapper[4947]: I1129 08:07:46.666257 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8gbkb" Nov 29 08:07:46 crc kubenswrapper[4947]: I1129 08:07:46.800433 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4ed605-37a1-4b03-832c-5d399bc42b99-catalog-content\") pod \"7a4ed605-37a1-4b03-832c-5d399bc42b99\" (UID: \"7a4ed605-37a1-4b03-832c-5d399bc42b99\") " Nov 29 08:07:46 crc kubenswrapper[4947]: I1129 08:07:46.800769 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4ed605-37a1-4b03-832c-5d399bc42b99-utilities\") pod \"7a4ed605-37a1-4b03-832c-5d399bc42b99\" (UID: \"7a4ed605-37a1-4b03-832c-5d399bc42b99\") " Nov 29 08:07:46 crc kubenswrapper[4947]: I1129 08:07:46.800960 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr9jg\" (UniqueName: \"kubernetes.io/projected/7a4ed605-37a1-4b03-832c-5d399bc42b99-kube-api-access-sr9jg\") pod \"7a4ed605-37a1-4b03-832c-5d399bc42b99\" (UID: \"7a4ed605-37a1-4b03-832c-5d399bc42b99\") " Nov 29 08:07:46 crc kubenswrapper[4947]: I1129 08:07:46.801603 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a4ed605-37a1-4b03-832c-5d399bc42b99-utilities" (OuterVolumeSpecName: "utilities") pod "7a4ed605-37a1-4b03-832c-5d399bc42b99" (UID: "7a4ed605-37a1-4b03-832c-5d399bc42b99"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:07:46 crc kubenswrapper[4947]: I1129 08:07:46.807067 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a4ed605-37a1-4b03-832c-5d399bc42b99-kube-api-access-sr9jg" (OuterVolumeSpecName: "kube-api-access-sr9jg") pod "7a4ed605-37a1-4b03-832c-5d399bc42b99" (UID: "7a4ed605-37a1-4b03-832c-5d399bc42b99"). InnerVolumeSpecName "kube-api-access-sr9jg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:07:46 crc kubenswrapper[4947]: I1129 08:07:46.902894 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4ed605-37a1-4b03-832c-5d399bc42b99-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:46 crc kubenswrapper[4947]: I1129 08:07:46.902935 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr9jg\" (UniqueName: \"kubernetes.io/projected/7a4ed605-37a1-4b03-832c-5d399bc42b99-kube-api-access-sr9jg\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:46 crc kubenswrapper[4947]: I1129 08:07:46.916587 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a4ed605-37a1-4b03-832c-5d399bc42b99-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a4ed605-37a1-4b03-832c-5d399bc42b99" (UID: "7a4ed605-37a1-4b03-832c-5d399bc42b99"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:07:47 crc kubenswrapper[4947]: I1129 08:07:47.005192 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4ed605-37a1-4b03-832c-5d399bc42b99-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:47 crc kubenswrapper[4947]: I1129 08:07:47.126011 4947 generic.go:334] "Generic (PLEG): container finished" podID="7a4ed605-37a1-4b03-832c-5d399bc42b99" containerID="5db97f828fca9626ca5ed28e5ad73d56af73c2e32594aa3919258862995fce12" exitCode=0 Nov 29 08:07:47 crc kubenswrapper[4947]: I1129 08:07:47.126062 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gbkb" event={"ID":"7a4ed605-37a1-4b03-832c-5d399bc42b99","Type":"ContainerDied","Data":"5db97f828fca9626ca5ed28e5ad73d56af73c2e32594aa3919258862995fce12"} Nov 29 08:07:47 crc kubenswrapper[4947]: I1129 08:07:47.126094 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gbkb" event={"ID":"7a4ed605-37a1-4b03-832c-5d399bc42b99","Type":"ContainerDied","Data":"fa7cce52e9c1e8fd41c8e471475f4e6811727aaea38ac0f2c2eef7e59872dfc1"} Nov 29 08:07:47 crc kubenswrapper[4947]: I1129 08:07:47.126114 4947 scope.go:117] "RemoveContainer" containerID="5db97f828fca9626ca5ed28e5ad73d56af73c2e32594aa3919258862995fce12" Nov 29 08:07:47 crc kubenswrapper[4947]: I1129 08:07:47.126113 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8gbkb" Nov 29 08:07:47 crc kubenswrapper[4947]: I1129 08:07:47.150386 4947 scope.go:117] "RemoveContainer" containerID="4047dc4631283839106e154940f48195ce3d79a776cef519f1a7dadc4d45567c" Nov 29 08:07:47 crc kubenswrapper[4947]: I1129 08:07:47.194422 4947 scope.go:117] "RemoveContainer" containerID="2da2210b0e4511e76c1569373e3514ff53cc50d430173d68aa4b78d0ffc0ee4b" Nov 29 08:07:47 crc kubenswrapper[4947]: I1129 08:07:47.203876 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8gbkb"] Nov 29 08:07:47 crc kubenswrapper[4947]: I1129 08:07:47.203924 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8gbkb"] Nov 29 08:07:47 crc kubenswrapper[4947]: I1129 08:07:47.239419 4947 scope.go:117] "RemoveContainer" containerID="5db97f828fca9626ca5ed28e5ad73d56af73c2e32594aa3919258862995fce12" Nov 29 08:07:47 crc kubenswrapper[4947]: E1129 08:07:47.243358 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5db97f828fca9626ca5ed28e5ad73d56af73c2e32594aa3919258862995fce12\": container with ID starting with 5db97f828fca9626ca5ed28e5ad73d56af73c2e32594aa3919258862995fce12 not found: ID does not exist" containerID="5db97f828fca9626ca5ed28e5ad73d56af73c2e32594aa3919258862995fce12" Nov 29 08:07:47 crc kubenswrapper[4947]: I1129 08:07:47.243410 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5db97f828fca9626ca5ed28e5ad73d56af73c2e32594aa3919258862995fce12"} err="failed to get container status \"5db97f828fca9626ca5ed28e5ad73d56af73c2e32594aa3919258862995fce12\": rpc error: code = NotFound desc = could not find container \"5db97f828fca9626ca5ed28e5ad73d56af73c2e32594aa3919258862995fce12\": container with ID starting with 5db97f828fca9626ca5ed28e5ad73d56af73c2e32594aa3919258862995fce12 not found: ID does 
not exist" Nov 29 08:07:47 crc kubenswrapper[4947]: I1129 08:07:47.243441 4947 scope.go:117] "RemoveContainer" containerID="4047dc4631283839106e154940f48195ce3d79a776cef519f1a7dadc4d45567c" Nov 29 08:07:47 crc kubenswrapper[4947]: E1129 08:07:47.246723 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4047dc4631283839106e154940f48195ce3d79a776cef519f1a7dadc4d45567c\": container with ID starting with 4047dc4631283839106e154940f48195ce3d79a776cef519f1a7dadc4d45567c not found: ID does not exist" containerID="4047dc4631283839106e154940f48195ce3d79a776cef519f1a7dadc4d45567c" Nov 29 08:07:47 crc kubenswrapper[4947]: I1129 08:07:47.246774 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4047dc4631283839106e154940f48195ce3d79a776cef519f1a7dadc4d45567c"} err="failed to get container status \"4047dc4631283839106e154940f48195ce3d79a776cef519f1a7dadc4d45567c\": rpc error: code = NotFound desc = could not find container \"4047dc4631283839106e154940f48195ce3d79a776cef519f1a7dadc4d45567c\": container with ID starting with 4047dc4631283839106e154940f48195ce3d79a776cef519f1a7dadc4d45567c not found: ID does not exist" Nov 29 08:07:47 crc kubenswrapper[4947]: I1129 08:07:47.246807 4947 scope.go:117] "RemoveContainer" containerID="2da2210b0e4511e76c1569373e3514ff53cc50d430173d68aa4b78d0ffc0ee4b" Nov 29 08:07:47 crc kubenswrapper[4947]: E1129 08:07:47.250357 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2da2210b0e4511e76c1569373e3514ff53cc50d430173d68aa4b78d0ffc0ee4b\": container with ID starting with 2da2210b0e4511e76c1569373e3514ff53cc50d430173d68aa4b78d0ffc0ee4b not found: ID does not exist" containerID="2da2210b0e4511e76c1569373e3514ff53cc50d430173d68aa4b78d0ffc0ee4b" Nov 29 08:07:47 crc kubenswrapper[4947]: I1129 08:07:47.250409 4947 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2da2210b0e4511e76c1569373e3514ff53cc50d430173d68aa4b78d0ffc0ee4b"} err="failed to get container status \"2da2210b0e4511e76c1569373e3514ff53cc50d430173d68aa4b78d0ffc0ee4b\": rpc error: code = NotFound desc = could not find container \"2da2210b0e4511e76c1569373e3514ff53cc50d430173d68aa4b78d0ffc0ee4b\": container with ID starting with 2da2210b0e4511e76c1569373e3514ff53cc50d430173d68aa4b78d0ffc0ee4b not found: ID does not exist" Nov 29 08:07:48 crc kubenswrapper[4947]: I1129 08:07:48.433448 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p5kpt" Nov 29 08:07:48 crc kubenswrapper[4947]: I1129 08:07:48.486958 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p5kpt" Nov 29 08:07:49 crc kubenswrapper[4947]: I1129 08:07:49.193589 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a4ed605-37a1-4b03-832c-5d399bc42b99" path="/var/lib/kubelet/pods/7a4ed605-37a1-4b03-832c-5d399bc42b99/volumes" Nov 29 08:07:50 crc kubenswrapper[4947]: I1129 08:07:50.482502 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p5kpt"] Nov 29 08:07:50 crc kubenswrapper[4947]: I1129 08:07:50.482832 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p5kpt" podUID="ad83548f-5649-4f89-900c-313df6cc048e" containerName="registry-server" containerID="cri-o://d8b3e65118fb8c6f348102658433461abc03cfcceec0a0ad4189a2b189f4273f" gracePeriod=2 Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.034830 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p5kpt" Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.169006 4947 generic.go:334] "Generic (PLEG): container finished" podID="ad83548f-5649-4f89-900c-313df6cc048e" containerID="d8b3e65118fb8c6f348102658433461abc03cfcceec0a0ad4189a2b189f4273f" exitCode=0 Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.169090 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p5kpt" Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.169090 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p5kpt" event={"ID":"ad83548f-5649-4f89-900c-313df6cc048e","Type":"ContainerDied","Data":"d8b3e65118fb8c6f348102658433461abc03cfcceec0a0ad4189a2b189f4273f"} Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.169313 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p5kpt" event={"ID":"ad83548f-5649-4f89-900c-313df6cc048e","Type":"ContainerDied","Data":"0034c75c095c84f746b833a02e9c70051e0cffc5008f9b4f021bb9c588ebc2cc"} Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.169356 4947 scope.go:117] "RemoveContainer" containerID="d8b3e65118fb8c6f348102658433461abc03cfcceec0a0ad4189a2b189f4273f" Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.189554 4947 scope.go:117] "RemoveContainer" containerID="670099837baee121157afc96614842aa494871d03c3b368051ce258ce4413b97" Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.203015 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7xb2\" (UniqueName: \"kubernetes.io/projected/ad83548f-5649-4f89-900c-313df6cc048e-kube-api-access-s7xb2\") pod \"ad83548f-5649-4f89-900c-313df6cc048e\" (UID: \"ad83548f-5649-4f89-900c-313df6cc048e\") " Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.203297 4947 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad83548f-5649-4f89-900c-313df6cc048e-catalog-content\") pod \"ad83548f-5649-4f89-900c-313df6cc048e\" (UID: \"ad83548f-5649-4f89-900c-313df6cc048e\") " Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.203363 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad83548f-5649-4f89-900c-313df6cc048e-utilities\") pod \"ad83548f-5649-4f89-900c-313df6cc048e\" (UID: \"ad83548f-5649-4f89-900c-313df6cc048e\") " Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.204828 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad83548f-5649-4f89-900c-313df6cc048e-utilities" (OuterVolumeSpecName: "utilities") pod "ad83548f-5649-4f89-900c-313df6cc048e" (UID: "ad83548f-5649-4f89-900c-313df6cc048e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.210329 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad83548f-5649-4f89-900c-313df6cc048e-kube-api-access-s7xb2" (OuterVolumeSpecName: "kube-api-access-s7xb2") pod "ad83548f-5649-4f89-900c-313df6cc048e" (UID: "ad83548f-5649-4f89-900c-313df6cc048e"). InnerVolumeSpecName "kube-api-access-s7xb2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.228305 4947 scope.go:117] "RemoveContainer" containerID="d0dc308eb078d44d7498d043e5b44fcb9c7663a76dfbe6203c85c3a9354b27fc" Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.273176 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad83548f-5649-4f89-900c-313df6cc048e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad83548f-5649-4f89-900c-313df6cc048e" (UID: "ad83548f-5649-4f89-900c-313df6cc048e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.307690 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad83548f-5649-4f89-900c-313df6cc048e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.307725 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad83548f-5649-4f89-900c-313df6cc048e-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.307734 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7xb2\" (UniqueName: \"kubernetes.io/projected/ad83548f-5649-4f89-900c-313df6cc048e-kube-api-access-s7xb2\") on node \"crc\" DevicePath \"\"" Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.314404 4947 scope.go:117] "RemoveContainer" containerID="d8b3e65118fb8c6f348102658433461abc03cfcceec0a0ad4189a2b189f4273f" Nov 29 08:07:51 crc kubenswrapper[4947]: E1129 08:07:51.315121 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8b3e65118fb8c6f348102658433461abc03cfcceec0a0ad4189a2b189f4273f\": container with ID starting with 
d8b3e65118fb8c6f348102658433461abc03cfcceec0a0ad4189a2b189f4273f not found: ID does not exist" containerID="d8b3e65118fb8c6f348102658433461abc03cfcceec0a0ad4189a2b189f4273f" Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.315156 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8b3e65118fb8c6f348102658433461abc03cfcceec0a0ad4189a2b189f4273f"} err="failed to get container status \"d8b3e65118fb8c6f348102658433461abc03cfcceec0a0ad4189a2b189f4273f\": rpc error: code = NotFound desc = could not find container \"d8b3e65118fb8c6f348102658433461abc03cfcceec0a0ad4189a2b189f4273f\": container with ID starting with d8b3e65118fb8c6f348102658433461abc03cfcceec0a0ad4189a2b189f4273f not found: ID does not exist" Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.315178 4947 scope.go:117] "RemoveContainer" containerID="670099837baee121157afc96614842aa494871d03c3b368051ce258ce4413b97" Nov 29 08:07:51 crc kubenswrapper[4947]: E1129 08:07:51.315700 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"670099837baee121157afc96614842aa494871d03c3b368051ce258ce4413b97\": container with ID starting with 670099837baee121157afc96614842aa494871d03c3b368051ce258ce4413b97 not found: ID does not exist" containerID="670099837baee121157afc96614842aa494871d03c3b368051ce258ce4413b97" Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.315728 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"670099837baee121157afc96614842aa494871d03c3b368051ce258ce4413b97"} err="failed to get container status \"670099837baee121157afc96614842aa494871d03c3b368051ce258ce4413b97\": rpc error: code = NotFound desc = could not find container \"670099837baee121157afc96614842aa494871d03c3b368051ce258ce4413b97\": container with ID starting with 670099837baee121157afc96614842aa494871d03c3b368051ce258ce4413b97 not found: ID does not 
exist" Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.315744 4947 scope.go:117] "RemoveContainer" containerID="d0dc308eb078d44d7498d043e5b44fcb9c7663a76dfbe6203c85c3a9354b27fc" Nov 29 08:07:51 crc kubenswrapper[4947]: E1129 08:07:51.316079 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0dc308eb078d44d7498d043e5b44fcb9c7663a76dfbe6203c85c3a9354b27fc\": container with ID starting with d0dc308eb078d44d7498d043e5b44fcb9c7663a76dfbe6203c85c3a9354b27fc not found: ID does not exist" containerID="d0dc308eb078d44d7498d043e5b44fcb9c7663a76dfbe6203c85c3a9354b27fc" Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.316102 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0dc308eb078d44d7498d043e5b44fcb9c7663a76dfbe6203c85c3a9354b27fc"} err="failed to get container status \"d0dc308eb078d44d7498d043e5b44fcb9c7663a76dfbe6203c85c3a9354b27fc\": rpc error: code = NotFound desc = could not find container \"d0dc308eb078d44d7498d043e5b44fcb9c7663a76dfbe6203c85c3a9354b27fc\": container with ID starting with d0dc308eb078d44d7498d043e5b44fcb9c7663a76dfbe6203c85c3a9354b27fc not found: ID does not exist" Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.507105 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p5kpt"] Nov 29 08:07:51 crc kubenswrapper[4947]: I1129 08:07:51.515791 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p5kpt"] Nov 29 08:07:53 crc kubenswrapper[4947]: I1129 08:07:53.190164 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad83548f-5649-4f89-900c-313df6cc048e" path="/var/lib/kubelet/pods/ad83548f-5649-4f89-900c-313df6cc048e/volumes" Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.024593 4947 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-4q9l5"] Nov 29 08:08:56 crc kubenswrapper[4947]: E1129 08:08:56.025722 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad83548f-5649-4f89-900c-313df6cc048e" containerName="extract-content" Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.025740 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad83548f-5649-4f89-900c-313df6cc048e" containerName="extract-content" Nov 29 08:08:56 crc kubenswrapper[4947]: E1129 08:08:56.025756 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a4ed605-37a1-4b03-832c-5d399bc42b99" containerName="extract-utilities" Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.025762 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a4ed605-37a1-4b03-832c-5d399bc42b99" containerName="extract-utilities" Nov 29 08:08:56 crc kubenswrapper[4947]: E1129 08:08:56.025774 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a4ed605-37a1-4b03-832c-5d399bc42b99" containerName="registry-server" Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.025780 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a4ed605-37a1-4b03-832c-5d399bc42b99" containerName="registry-server" Nov 29 08:08:56 crc kubenswrapper[4947]: E1129 08:08:56.025793 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a4ed605-37a1-4b03-832c-5d399bc42b99" containerName="extract-content" Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.025798 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a4ed605-37a1-4b03-832c-5d399bc42b99" containerName="extract-content" Nov 29 08:08:56 crc kubenswrapper[4947]: E1129 08:08:56.025809 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad83548f-5649-4f89-900c-313df6cc048e" containerName="extract-utilities" Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.025814 4947 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ad83548f-5649-4f89-900c-313df6cc048e" containerName="extract-utilities" Nov 29 08:08:56 crc kubenswrapper[4947]: E1129 08:08:56.025855 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad83548f-5649-4f89-900c-313df6cc048e" containerName="registry-server" Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.025865 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad83548f-5649-4f89-900c-313df6cc048e" containerName="registry-server" Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.026117 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a4ed605-37a1-4b03-832c-5d399bc42b99" containerName="registry-server" Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.026134 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad83548f-5649-4f89-900c-313df6cc048e" containerName="registry-server" Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.027666 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4q9l5" Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.041745 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4q9l5"] Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.206797 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwwvv\" (UniqueName: \"kubernetes.io/projected/a8d44b84-c0eb-40dd-84f0-e2dbf76d9003-kube-api-access-jwwvv\") pod \"redhat-marketplace-4q9l5\" (UID: \"a8d44b84-c0eb-40dd-84f0-e2dbf76d9003\") " pod="openshift-marketplace/redhat-marketplace-4q9l5" Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.207309 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d44b84-c0eb-40dd-84f0-e2dbf76d9003-catalog-content\") pod \"redhat-marketplace-4q9l5\" (UID: 
\"a8d44b84-c0eb-40dd-84f0-e2dbf76d9003\") " pod="openshift-marketplace/redhat-marketplace-4q9l5" Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.207539 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8d44b84-c0eb-40dd-84f0-e2dbf76d9003-utilities\") pod \"redhat-marketplace-4q9l5\" (UID: \"a8d44b84-c0eb-40dd-84f0-e2dbf76d9003\") " pod="openshift-marketplace/redhat-marketplace-4q9l5" Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.309801 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwwvv\" (UniqueName: \"kubernetes.io/projected/a8d44b84-c0eb-40dd-84f0-e2dbf76d9003-kube-api-access-jwwvv\") pod \"redhat-marketplace-4q9l5\" (UID: \"a8d44b84-c0eb-40dd-84f0-e2dbf76d9003\") " pod="openshift-marketplace/redhat-marketplace-4q9l5" Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.309948 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d44b84-c0eb-40dd-84f0-e2dbf76d9003-catalog-content\") pod \"redhat-marketplace-4q9l5\" (UID: \"a8d44b84-c0eb-40dd-84f0-e2dbf76d9003\") " pod="openshift-marketplace/redhat-marketplace-4q9l5" Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.310026 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8d44b84-c0eb-40dd-84f0-e2dbf76d9003-utilities\") pod \"redhat-marketplace-4q9l5\" (UID: \"a8d44b84-c0eb-40dd-84f0-e2dbf76d9003\") " pod="openshift-marketplace/redhat-marketplace-4q9l5" Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.310661 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8d44b84-c0eb-40dd-84f0-e2dbf76d9003-utilities\") pod \"redhat-marketplace-4q9l5\" (UID: 
\"a8d44b84-c0eb-40dd-84f0-e2dbf76d9003\") " pod="openshift-marketplace/redhat-marketplace-4q9l5" Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.310973 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d44b84-c0eb-40dd-84f0-e2dbf76d9003-catalog-content\") pod \"redhat-marketplace-4q9l5\" (UID: \"a8d44b84-c0eb-40dd-84f0-e2dbf76d9003\") " pod="openshift-marketplace/redhat-marketplace-4q9l5" Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.332469 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwwvv\" (UniqueName: \"kubernetes.io/projected/a8d44b84-c0eb-40dd-84f0-e2dbf76d9003-kube-api-access-jwwvv\") pod \"redhat-marketplace-4q9l5\" (UID: \"a8d44b84-c0eb-40dd-84f0-e2dbf76d9003\") " pod="openshift-marketplace/redhat-marketplace-4q9l5" Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.356425 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4q9l5" Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.690419 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4q9l5"] Nov 29 08:08:56 crc kubenswrapper[4947]: I1129 08:08:56.816421 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4q9l5" event={"ID":"a8d44b84-c0eb-40dd-84f0-e2dbf76d9003","Type":"ContainerStarted","Data":"c7a3c1c6017d4b7db0b302a2f33de1628600b910faf461d3bbdee51d3ad6fbac"} Nov 29 08:08:57 crc kubenswrapper[4947]: I1129 08:08:57.831451 4947 generic.go:334] "Generic (PLEG): container finished" podID="a8d44b84-c0eb-40dd-84f0-e2dbf76d9003" containerID="76d0155e87f8f27afae1ac1c8b1f6fce47d55b1b1d7464acf53bb747cfb946f3" exitCode=0 Nov 29 08:08:57 crc kubenswrapper[4947]: I1129 08:08:57.831920 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4q9l5" 
event={"ID":"a8d44b84-c0eb-40dd-84f0-e2dbf76d9003","Type":"ContainerDied","Data":"76d0155e87f8f27afae1ac1c8b1f6fce47d55b1b1d7464acf53bb747cfb946f3"} Nov 29 08:09:11 crc kubenswrapper[4947]: I1129 08:09:11.957058 4947 generic.go:334] "Generic (PLEG): container finished" podID="a8d44b84-c0eb-40dd-84f0-e2dbf76d9003" containerID="15d07d06143846a036b4250fc81fb488e79e29f76f74071c419d4950f42e53ca" exitCode=0 Nov 29 08:09:11 crc kubenswrapper[4947]: I1129 08:09:11.957135 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4q9l5" event={"ID":"a8d44b84-c0eb-40dd-84f0-e2dbf76d9003","Type":"ContainerDied","Data":"15d07d06143846a036b4250fc81fb488e79e29f76f74071c419d4950f42e53ca"} Nov 29 08:09:29 crc kubenswrapper[4947]: E1129 08:09:29.286922 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: can't talk to a V1 container registry" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Nov 29 08:09:29 crc kubenswrapper[4947]: E1129 08:09:29.287591 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwwvv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4q9l5_openshift-marketplace(a8d44b84-c0eb-40dd-84f0-e2dbf76d9003): ErrImagePull: initializing source 
docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: can't talk to a V1 container registry" logger="UnhandledError" Nov 29 08:09:29 crc kubenswrapper[4947]: E1129 08:09:29.288698 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: can't talk to a V1 container registry\"" pod="openshift-marketplace/redhat-marketplace-4q9l5" podUID="a8d44b84-c0eb-40dd-84f0-e2dbf76d9003" Nov 29 08:09:30 crc kubenswrapper[4947]: E1129 08:09:30.135189 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-4q9l5" podUID="a8d44b84-c0eb-40dd-84f0-e2dbf76d9003" Nov 29 08:09:52 crc kubenswrapper[4947]: I1129 08:09:52.987532 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:09:52 crc kubenswrapper[4947]: I1129 08:09:52.988153 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:10:01 crc kubenswrapper[4947]: I1129 08:10:01.448876 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-4q9l5" event={"ID":"a8d44b84-c0eb-40dd-84f0-e2dbf76d9003","Type":"ContainerStarted","Data":"17598d66deb161094be5d2422d1ca1a22bc14989c6b49b8f45b41bbb167be9df"} Nov 29 08:10:01 crc kubenswrapper[4947]: I1129 08:10:01.477536 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4q9l5" podStartSLOduration=3.847714762 podStartE2EDuration="1m6.47751217s" podCreationTimestamp="2025-11-29 08:08:55 +0000 UTC" firstStartedPulling="2025-11-29 08:08:57.834902038 +0000 UTC m=+5688.879284119" lastFinishedPulling="2025-11-29 08:10:00.464699446 +0000 UTC m=+5751.509081527" observedRunningTime="2025-11-29 08:10:01.470690238 +0000 UTC m=+5752.515072319" watchObservedRunningTime="2025-11-29 08:10:01.47751217 +0000 UTC m=+5752.521894251" Nov 29 08:10:06 crc kubenswrapper[4947]: I1129 08:10:06.357578 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4q9l5" Nov 29 08:10:06 crc kubenswrapper[4947]: I1129 08:10:06.358238 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4q9l5" Nov 29 08:10:06 crc kubenswrapper[4947]: I1129 08:10:06.412076 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4q9l5" Nov 29 08:10:06 crc kubenswrapper[4947]: I1129 08:10:06.543280 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4q9l5" Nov 29 08:10:06 crc kubenswrapper[4947]: I1129 08:10:06.647066 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4q9l5"] Nov 29 08:10:08 crc kubenswrapper[4947]: I1129 08:10:08.516151 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4q9l5" 
podUID="a8d44b84-c0eb-40dd-84f0-e2dbf76d9003" containerName="registry-server" containerID="cri-o://17598d66deb161094be5d2422d1ca1a22bc14989c6b49b8f45b41bbb167be9df" gracePeriod=2 Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.108295 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4q9l5" Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.138501 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d44b84-c0eb-40dd-84f0-e2dbf76d9003-catalog-content\") pod \"a8d44b84-c0eb-40dd-84f0-e2dbf76d9003\" (UID: \"a8d44b84-c0eb-40dd-84f0-e2dbf76d9003\") " Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.138727 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8d44b84-c0eb-40dd-84f0-e2dbf76d9003-utilities\") pod \"a8d44b84-c0eb-40dd-84f0-e2dbf76d9003\" (UID: \"a8d44b84-c0eb-40dd-84f0-e2dbf76d9003\") " Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.138754 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwwvv\" (UniqueName: \"kubernetes.io/projected/a8d44b84-c0eb-40dd-84f0-e2dbf76d9003-kube-api-access-jwwvv\") pod \"a8d44b84-c0eb-40dd-84f0-e2dbf76d9003\" (UID: \"a8d44b84-c0eb-40dd-84f0-e2dbf76d9003\") " Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.140008 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8d44b84-c0eb-40dd-84f0-e2dbf76d9003-utilities" (OuterVolumeSpecName: "utilities") pod "a8d44b84-c0eb-40dd-84f0-e2dbf76d9003" (UID: "a8d44b84-c0eb-40dd-84f0-e2dbf76d9003"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.146599 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8d44b84-c0eb-40dd-84f0-e2dbf76d9003-kube-api-access-jwwvv" (OuterVolumeSpecName: "kube-api-access-jwwvv") pod "a8d44b84-c0eb-40dd-84f0-e2dbf76d9003" (UID: "a8d44b84-c0eb-40dd-84f0-e2dbf76d9003"). InnerVolumeSpecName "kube-api-access-jwwvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.164705 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8d44b84-c0eb-40dd-84f0-e2dbf76d9003-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8d44b84-c0eb-40dd-84f0-e2dbf76d9003" (UID: "a8d44b84-c0eb-40dd-84f0-e2dbf76d9003"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.241061 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwwvv\" (UniqueName: \"kubernetes.io/projected/a8d44b84-c0eb-40dd-84f0-e2dbf76d9003-kube-api-access-jwwvv\") on node \"crc\" DevicePath \"\"" Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.241100 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8d44b84-c0eb-40dd-84f0-e2dbf76d9003-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.241109 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d44b84-c0eb-40dd-84f0-e2dbf76d9003-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.526942 4947 generic.go:334] "Generic (PLEG): container finished" podID="a8d44b84-c0eb-40dd-84f0-e2dbf76d9003" 
containerID="17598d66deb161094be5d2422d1ca1a22bc14989c6b49b8f45b41bbb167be9df" exitCode=0 Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.526992 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4q9l5" event={"ID":"a8d44b84-c0eb-40dd-84f0-e2dbf76d9003","Type":"ContainerDied","Data":"17598d66deb161094be5d2422d1ca1a22bc14989c6b49b8f45b41bbb167be9df"} Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.527031 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4q9l5" event={"ID":"a8d44b84-c0eb-40dd-84f0-e2dbf76d9003","Type":"ContainerDied","Data":"c7a3c1c6017d4b7db0b302a2f33de1628600b910faf461d3bbdee51d3ad6fbac"} Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.527035 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4q9l5" Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.527051 4947 scope.go:117] "RemoveContainer" containerID="17598d66deb161094be5d2422d1ca1a22bc14989c6b49b8f45b41bbb167be9df" Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.552583 4947 scope.go:117] "RemoveContainer" containerID="15d07d06143846a036b4250fc81fb488e79e29f76f74071c419d4950f42e53ca" Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.554957 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4q9l5"] Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.562924 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4q9l5"] Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.578552 4947 scope.go:117] "RemoveContainer" containerID="76d0155e87f8f27afae1ac1c8b1f6fce47d55b1b1d7464acf53bb747cfb946f3" Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.634340 4947 scope.go:117] "RemoveContainer" containerID="17598d66deb161094be5d2422d1ca1a22bc14989c6b49b8f45b41bbb167be9df" Nov 29 
08:10:09 crc kubenswrapper[4947]: E1129 08:10:09.636010 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17598d66deb161094be5d2422d1ca1a22bc14989c6b49b8f45b41bbb167be9df\": container with ID starting with 17598d66deb161094be5d2422d1ca1a22bc14989c6b49b8f45b41bbb167be9df not found: ID does not exist" containerID="17598d66deb161094be5d2422d1ca1a22bc14989c6b49b8f45b41bbb167be9df" Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.636078 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17598d66deb161094be5d2422d1ca1a22bc14989c6b49b8f45b41bbb167be9df"} err="failed to get container status \"17598d66deb161094be5d2422d1ca1a22bc14989c6b49b8f45b41bbb167be9df\": rpc error: code = NotFound desc = could not find container \"17598d66deb161094be5d2422d1ca1a22bc14989c6b49b8f45b41bbb167be9df\": container with ID starting with 17598d66deb161094be5d2422d1ca1a22bc14989c6b49b8f45b41bbb167be9df not found: ID does not exist" Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.636148 4947 scope.go:117] "RemoveContainer" containerID="15d07d06143846a036b4250fc81fb488e79e29f76f74071c419d4950f42e53ca" Nov 29 08:10:09 crc kubenswrapper[4947]: E1129 08:10:09.638591 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15d07d06143846a036b4250fc81fb488e79e29f76f74071c419d4950f42e53ca\": container with ID starting with 15d07d06143846a036b4250fc81fb488e79e29f76f74071c419d4950f42e53ca not found: ID does not exist" containerID="15d07d06143846a036b4250fc81fb488e79e29f76f74071c419d4950f42e53ca" Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.638643 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15d07d06143846a036b4250fc81fb488e79e29f76f74071c419d4950f42e53ca"} err="failed to get container status 
\"15d07d06143846a036b4250fc81fb488e79e29f76f74071c419d4950f42e53ca\": rpc error: code = NotFound desc = could not find container \"15d07d06143846a036b4250fc81fb488e79e29f76f74071c419d4950f42e53ca\": container with ID starting with 15d07d06143846a036b4250fc81fb488e79e29f76f74071c419d4950f42e53ca not found: ID does not exist" Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.638672 4947 scope.go:117] "RemoveContainer" containerID="76d0155e87f8f27afae1ac1c8b1f6fce47d55b1b1d7464acf53bb747cfb946f3" Nov 29 08:10:09 crc kubenswrapper[4947]: E1129 08:10:09.638971 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76d0155e87f8f27afae1ac1c8b1f6fce47d55b1b1d7464acf53bb747cfb946f3\": container with ID starting with 76d0155e87f8f27afae1ac1c8b1f6fce47d55b1b1d7464acf53bb747cfb946f3 not found: ID does not exist" containerID="76d0155e87f8f27afae1ac1c8b1f6fce47d55b1b1d7464acf53bb747cfb946f3" Nov 29 08:10:09 crc kubenswrapper[4947]: I1129 08:10:09.638995 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76d0155e87f8f27afae1ac1c8b1f6fce47d55b1b1d7464acf53bb747cfb946f3"} err="failed to get container status \"76d0155e87f8f27afae1ac1c8b1f6fce47d55b1b1d7464acf53bb747cfb946f3\": rpc error: code = NotFound desc = could not find container \"76d0155e87f8f27afae1ac1c8b1f6fce47d55b1b1d7464acf53bb747cfb946f3\": container with ID starting with 76d0155e87f8f27afae1ac1c8b1f6fce47d55b1b1d7464acf53bb747cfb946f3 not found: ID does not exist" Nov 29 08:10:10 crc kubenswrapper[4947]: E1129 08:10:10.561610 4947 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8d44b84_c0eb_40dd_84f0_e2dbf76d9003.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8d44b84_c0eb_40dd_84f0_e2dbf76d9003.slice/crio-c7a3c1c6017d4b7db0b302a2f33de1628600b910faf461d3bbdee51d3ad6fbac\": RecentStats: unable to find data in memory cache]" Nov 29 08:10:11 crc kubenswrapper[4947]: I1129 08:10:11.194546 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8d44b84-c0eb-40dd-84f0-e2dbf76d9003" path="/var/lib/kubelet/pods/a8d44b84-c0eb-40dd-84f0-e2dbf76d9003/volumes" Nov 29 08:10:20 crc kubenswrapper[4947]: E1129 08:10:20.833042 4947 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8d44b84_c0eb_40dd_84f0_e2dbf76d9003.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8d44b84_c0eb_40dd_84f0_e2dbf76d9003.slice/crio-c7a3c1c6017d4b7db0b302a2f33de1628600b910faf461d3bbdee51d3ad6fbac\": RecentStats: unable to find data in memory cache]" Nov 29 08:10:22 crc kubenswrapper[4947]: I1129 08:10:22.988074 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:10:22 crc kubenswrapper[4947]: I1129 08:10:22.988641 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:10:31 crc kubenswrapper[4947]: E1129 08:10:31.099803 4947 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8d44b84_c0eb_40dd_84f0_e2dbf76d9003.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8d44b84_c0eb_40dd_84f0_e2dbf76d9003.slice/crio-c7a3c1c6017d4b7db0b302a2f33de1628600b910faf461d3bbdee51d3ad6fbac\": RecentStats: unable to find data in memory cache]" Nov 29 08:10:41 crc kubenswrapper[4947]: E1129 08:10:41.353942 4947 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8d44b84_c0eb_40dd_84f0_e2dbf76d9003.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8d44b84_c0eb_40dd_84f0_e2dbf76d9003.slice/crio-c7a3c1c6017d4b7db0b302a2f33de1628600b910faf461d3bbdee51d3ad6fbac\": RecentStats: unable to find data in memory cache]" Nov 29 08:10:51 crc kubenswrapper[4947]: E1129 08:10:51.596193 4947 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8d44b84_c0eb_40dd_84f0_e2dbf76d9003.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8d44b84_c0eb_40dd_84f0_e2dbf76d9003.slice/crio-c7a3c1c6017d4b7db0b302a2f33de1628600b910faf461d3bbdee51d3ad6fbac\": RecentStats: unable to find data in memory cache]" Nov 29 08:10:52 crc kubenswrapper[4947]: I1129 08:10:52.988398 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:10:52 crc kubenswrapper[4947]: I1129 08:10:52.988933 4947 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:10:52 crc kubenswrapper[4947]: I1129 08:10:52.989002 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 08:10:52 crc kubenswrapper[4947]: I1129 08:10:52.990120 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f016ad3b48559c4e1b707ec7329efae28a02ca95c948d4232c8f1dc15b1126b6"} pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:10:52 crc kubenswrapper[4947]: I1129 08:10:52.990180 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" containerID="cri-o://f016ad3b48559c4e1b707ec7329efae28a02ca95c948d4232c8f1dc15b1126b6" gracePeriod=600 Nov 29 08:10:54 crc kubenswrapper[4947]: I1129 08:10:54.028520 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerID="f016ad3b48559c4e1b707ec7329efae28a02ca95c948d4232c8f1dc15b1126b6" exitCode=0 Nov 29 08:10:54 crc kubenswrapper[4947]: I1129 08:10:54.028594 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerDied","Data":"f016ad3b48559c4e1b707ec7329efae28a02ca95c948d4232c8f1dc15b1126b6"} Nov 29 08:10:54 crc kubenswrapper[4947]: I1129 08:10:54.029034 4947 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9"} Nov 29 08:10:54 crc kubenswrapper[4947]: I1129 08:10:54.029064 4947 scope.go:117] "RemoveContainer" containerID="9efc0bea37e0aeeedba6f448c245a84df0d0229c909c70cf18ea4cc93f9ccdd5" Nov 29 08:11:01 crc kubenswrapper[4947]: E1129 08:11:01.852571 4947 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8d44b84_c0eb_40dd_84f0_e2dbf76d9003.slice/crio-c7a3c1c6017d4b7db0b302a2f33de1628600b910faf461d3bbdee51d3ad6fbac\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8d44b84_c0eb_40dd_84f0_e2dbf76d9003.slice\": RecentStats: unable to find data in memory cache]" Nov 29 08:13:22 crc kubenswrapper[4947]: I1129 08:13:22.987637 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:13:22 crc kubenswrapper[4947]: I1129 08:13:22.988315 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:13:52 crc kubenswrapper[4947]: I1129 08:13:52.987870 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:13:52 crc kubenswrapper[4947]: I1129 08:13:52.988620 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:14:22 crc kubenswrapper[4947]: I1129 08:14:22.987574 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:14:22 crc kubenswrapper[4947]: I1129 08:14:22.988196 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:14:22 crc kubenswrapper[4947]: I1129 08:14:22.988264 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 08:14:22 crc kubenswrapper[4947]: I1129 08:14:22.989169 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9"} pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:14:22 crc kubenswrapper[4947]: I1129 08:14:22.989238 4947 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" containerID="cri-o://d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" gracePeriod=600 Nov 29 08:14:23 crc kubenswrapper[4947]: E1129 08:14:23.123176 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:14:23 crc kubenswrapper[4947]: I1129 08:14:23.313687 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" exitCode=0 Nov 29 08:14:23 crc kubenswrapper[4947]: I1129 08:14:23.313738 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerDied","Data":"d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9"} Nov 29 08:14:23 crc kubenswrapper[4947]: I1129 08:14:23.313775 4947 scope.go:117] "RemoveContainer" containerID="f016ad3b48559c4e1b707ec7329efae28a02ca95c948d4232c8f1dc15b1126b6" Nov 29 08:14:23 crc kubenswrapper[4947]: I1129 08:14:23.314752 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:14:23 crc kubenswrapper[4947]: E1129 08:14:23.316631 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:14:36 crc kubenswrapper[4947]: I1129 08:14:36.179762 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:14:36 crc kubenswrapper[4947]: E1129 08:14:36.180990 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:14:48 crc kubenswrapper[4947]: I1129 08:14:48.179785 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:14:48 crc kubenswrapper[4947]: E1129 08:14:48.180498 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:15:00 crc kubenswrapper[4947]: I1129 08:15:00.148995 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406735-c5qnh"] Nov 29 08:15:00 crc kubenswrapper[4947]: E1129 08:15:00.150201 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8d44b84-c0eb-40dd-84f0-e2dbf76d9003" containerName="registry-server" 
Nov 29 08:15:00 crc kubenswrapper[4947]: I1129 08:15:00.150270 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8d44b84-c0eb-40dd-84f0-e2dbf76d9003" containerName="registry-server" Nov 29 08:15:00 crc kubenswrapper[4947]: E1129 08:15:00.150315 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8d44b84-c0eb-40dd-84f0-e2dbf76d9003" containerName="extract-utilities" Nov 29 08:15:00 crc kubenswrapper[4947]: I1129 08:15:00.150324 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8d44b84-c0eb-40dd-84f0-e2dbf76d9003" containerName="extract-utilities" Nov 29 08:15:00 crc kubenswrapper[4947]: E1129 08:15:00.150360 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8d44b84-c0eb-40dd-84f0-e2dbf76d9003" containerName="extract-content" Nov 29 08:15:00 crc kubenswrapper[4947]: I1129 08:15:00.150369 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8d44b84-c0eb-40dd-84f0-e2dbf76d9003" containerName="extract-content" Nov 29 08:15:00 crc kubenswrapper[4947]: I1129 08:15:00.150641 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8d44b84-c0eb-40dd-84f0-e2dbf76d9003" containerName="registry-server" Nov 29 08:15:00 crc kubenswrapper[4947]: I1129 08:15:00.151516 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-c5qnh" Nov 29 08:15:00 crc kubenswrapper[4947]: I1129 08:15:00.154724 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 08:15:00 crc kubenswrapper[4947]: I1129 08:15:00.154730 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 08:15:00 crc kubenswrapper[4947]: I1129 08:15:00.159620 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406735-c5qnh"] Nov 29 08:15:00 crc kubenswrapper[4947]: I1129 08:15:00.179654 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:15:00 crc kubenswrapper[4947]: E1129 08:15:00.179986 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:15:00 crc kubenswrapper[4947]: I1129 08:15:00.293973 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0610d9a5-1571-498f-a368-733c4c6bf73a-config-volume\") pod \"collect-profiles-29406735-c5qnh\" (UID: \"0610d9a5-1571-498f-a368-733c4c6bf73a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-c5qnh" Nov 29 08:15:00 crc kubenswrapper[4947]: I1129 08:15:00.295095 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/0610d9a5-1571-498f-a368-733c4c6bf73a-secret-volume\") pod \"collect-profiles-29406735-c5qnh\" (UID: \"0610d9a5-1571-498f-a368-733c4c6bf73a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-c5qnh" Nov 29 08:15:00 crc kubenswrapper[4947]: I1129 08:15:00.295656 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55z57\" (UniqueName: \"kubernetes.io/projected/0610d9a5-1571-498f-a368-733c4c6bf73a-kube-api-access-55z57\") pod \"collect-profiles-29406735-c5qnh\" (UID: \"0610d9a5-1571-498f-a368-733c4c6bf73a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-c5qnh" Nov 29 08:15:00 crc kubenswrapper[4947]: I1129 08:15:00.398056 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55z57\" (UniqueName: \"kubernetes.io/projected/0610d9a5-1571-498f-a368-733c4c6bf73a-kube-api-access-55z57\") pod \"collect-profiles-29406735-c5qnh\" (UID: \"0610d9a5-1571-498f-a368-733c4c6bf73a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-c5qnh" Nov 29 08:15:00 crc kubenswrapper[4947]: I1129 08:15:00.398181 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0610d9a5-1571-498f-a368-733c4c6bf73a-config-volume\") pod \"collect-profiles-29406735-c5qnh\" (UID: \"0610d9a5-1571-498f-a368-733c4c6bf73a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-c5qnh" Nov 29 08:15:00 crc kubenswrapper[4947]: I1129 08:15:00.398371 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0610d9a5-1571-498f-a368-733c4c6bf73a-secret-volume\") pod \"collect-profiles-29406735-c5qnh\" (UID: \"0610d9a5-1571-498f-a368-733c4c6bf73a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-c5qnh" Nov 29 08:15:00 
crc kubenswrapper[4947]: I1129 08:15:00.399236 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0610d9a5-1571-498f-a368-733c4c6bf73a-config-volume\") pod \"collect-profiles-29406735-c5qnh\" (UID: \"0610d9a5-1571-498f-a368-733c4c6bf73a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-c5qnh" Nov 29 08:15:00 crc kubenswrapper[4947]: I1129 08:15:00.417921 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55z57\" (UniqueName: \"kubernetes.io/projected/0610d9a5-1571-498f-a368-733c4c6bf73a-kube-api-access-55z57\") pod \"collect-profiles-29406735-c5qnh\" (UID: \"0610d9a5-1571-498f-a368-733c4c6bf73a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-c5qnh" Nov 29 08:15:00 crc kubenswrapper[4947]: I1129 08:15:00.420160 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0610d9a5-1571-498f-a368-733c4c6bf73a-secret-volume\") pod \"collect-profiles-29406735-c5qnh\" (UID: \"0610d9a5-1571-498f-a368-733c4c6bf73a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-c5qnh" Nov 29 08:15:00 crc kubenswrapper[4947]: I1129 08:15:00.493061 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-c5qnh" Nov 29 08:15:00 crc kubenswrapper[4947]: I1129 08:15:00.943663 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406735-c5qnh"] Nov 29 08:15:00 crc kubenswrapper[4947]: W1129 08:15:00.955682 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0610d9a5_1571_498f_a368_733c4c6bf73a.slice/crio-2d21323a274c5fa266afbe159987ef8f37bc08470a1bc81c9a8464f788bd7b98 WatchSource:0}: Error finding container 2d21323a274c5fa266afbe159987ef8f37bc08470a1bc81c9a8464f788bd7b98: Status 404 returned error can't find the container with id 2d21323a274c5fa266afbe159987ef8f37bc08470a1bc81c9a8464f788bd7b98 Nov 29 08:15:01 crc kubenswrapper[4947]: I1129 08:15:01.697035 4947 generic.go:334] "Generic (PLEG): container finished" podID="0610d9a5-1571-498f-a368-733c4c6bf73a" containerID="c7dbfb1dec6c4ac31455d20457606ffd394d0ba0acb971aa776cce7d143f9aa1" exitCode=0 Nov 29 08:15:01 crc kubenswrapper[4947]: I1129 08:15:01.697122 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-c5qnh" event={"ID":"0610d9a5-1571-498f-a368-733c4c6bf73a","Type":"ContainerDied","Data":"c7dbfb1dec6c4ac31455d20457606ffd394d0ba0acb971aa776cce7d143f9aa1"} Nov 29 08:15:01 crc kubenswrapper[4947]: I1129 08:15:01.697520 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-c5qnh" event={"ID":"0610d9a5-1571-498f-a368-733c4c6bf73a","Type":"ContainerStarted","Data":"2d21323a274c5fa266afbe159987ef8f37bc08470a1bc81c9a8464f788bd7b98"} Nov 29 08:15:03 crc kubenswrapper[4947]: I1129 08:15:03.087626 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-c5qnh" Nov 29 08:15:03 crc kubenswrapper[4947]: I1129 08:15:03.158765 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55z57\" (UniqueName: \"kubernetes.io/projected/0610d9a5-1571-498f-a368-733c4c6bf73a-kube-api-access-55z57\") pod \"0610d9a5-1571-498f-a368-733c4c6bf73a\" (UID: \"0610d9a5-1571-498f-a368-733c4c6bf73a\") " Nov 29 08:15:03 crc kubenswrapper[4947]: I1129 08:15:03.159029 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0610d9a5-1571-498f-a368-733c4c6bf73a-secret-volume\") pod \"0610d9a5-1571-498f-a368-733c4c6bf73a\" (UID: \"0610d9a5-1571-498f-a368-733c4c6bf73a\") " Nov 29 08:15:03 crc kubenswrapper[4947]: I1129 08:15:03.159134 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0610d9a5-1571-498f-a368-733c4c6bf73a-config-volume\") pod \"0610d9a5-1571-498f-a368-733c4c6bf73a\" (UID: \"0610d9a5-1571-498f-a368-733c4c6bf73a\") " Nov 29 08:15:03 crc kubenswrapper[4947]: I1129 08:15:03.160617 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0610d9a5-1571-498f-a368-733c4c6bf73a-config-volume" (OuterVolumeSpecName: "config-volume") pod "0610d9a5-1571-498f-a368-733c4c6bf73a" (UID: "0610d9a5-1571-498f-a368-733c4c6bf73a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:15:03 crc kubenswrapper[4947]: I1129 08:15:03.168297 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0610d9a5-1571-498f-a368-733c4c6bf73a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0610d9a5-1571-498f-a368-733c4c6bf73a" (UID: "0610d9a5-1571-498f-a368-733c4c6bf73a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:15:03 crc kubenswrapper[4947]: I1129 08:15:03.169613 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0610d9a5-1571-498f-a368-733c4c6bf73a-kube-api-access-55z57" (OuterVolumeSpecName: "kube-api-access-55z57") pod "0610d9a5-1571-498f-a368-733c4c6bf73a" (UID: "0610d9a5-1571-498f-a368-733c4c6bf73a"). InnerVolumeSpecName "kube-api-access-55z57". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:15:03 crc kubenswrapper[4947]: I1129 08:15:03.262913 4947 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0610d9a5-1571-498f-a368-733c4c6bf73a-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 08:15:03 crc kubenswrapper[4947]: I1129 08:15:03.263092 4947 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0610d9a5-1571-498f-a368-733c4c6bf73a-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 08:15:03 crc kubenswrapper[4947]: I1129 08:15:03.263158 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55z57\" (UniqueName: \"kubernetes.io/projected/0610d9a5-1571-498f-a368-733c4c6bf73a-kube-api-access-55z57\") on node \"crc\" DevicePath \"\"" Nov 29 08:15:03 crc kubenswrapper[4947]: I1129 08:15:03.713917 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-c5qnh" event={"ID":"0610d9a5-1571-498f-a368-733c4c6bf73a","Type":"ContainerDied","Data":"2d21323a274c5fa266afbe159987ef8f37bc08470a1bc81c9a8464f788bd7b98"} Nov 29 08:15:03 crc kubenswrapper[4947]: I1129 08:15:03.714243 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d21323a274c5fa266afbe159987ef8f37bc08470a1bc81c9a8464f788bd7b98" Nov 29 08:15:03 crc kubenswrapper[4947]: I1129 08:15:03.713979 4947 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-c5qnh" Nov 29 08:15:04 crc kubenswrapper[4947]: I1129 08:15:04.174642 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-qzj4k"] Nov 29 08:15:04 crc kubenswrapper[4947]: I1129 08:15:04.184929 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-qzj4k"] Nov 29 08:15:05 crc kubenswrapper[4947]: I1129 08:15:05.192810 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74e233d8-d395-4660-8bfa-de810efcc150" path="/var/lib/kubelet/pods/74e233d8-d395-4660-8bfa-de810efcc150/volumes" Nov 29 08:15:15 crc kubenswrapper[4947]: I1129 08:15:15.179162 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:15:15 crc kubenswrapper[4947]: E1129 08:15:15.180174 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:15:26 crc kubenswrapper[4947]: I1129 08:15:26.179046 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:15:26 crc kubenswrapper[4947]: E1129 08:15:26.179995 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:15:26 crc kubenswrapper[4947]: I1129 08:15:26.919467 4947 generic.go:334] "Generic (PLEG): container finished" podID="6adb2028-a62e-456d-8863-55da513e78f2" containerID="1bc5fd6cddbd2b45bc2863ddc970d5e883133108f1907a71d34f990f81884de8" exitCode=1 Nov 29 08:15:26 crc kubenswrapper[4947]: I1129 08:15:26.919517 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6adb2028-a62e-456d-8863-55da513e78f2","Type":"ContainerDied","Data":"1bc5fd6cddbd2b45bc2863ddc970d5e883133108f1907a71d34f990f81884de8"} Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.307178 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.309918 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6adb2028-a62e-456d-8863-55da513e78f2-ssh-key\") pod \"6adb2028-a62e-456d-8863-55da513e78f2\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.309978 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6adb2028-a62e-456d-8863-55da513e78f2-config-data\") pod \"6adb2028-a62e-456d-8863-55da513e78f2\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.310063 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6adb2028-a62e-456d-8863-55da513e78f2-openstack-config\") pod \"6adb2028-a62e-456d-8863-55da513e78f2\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.310093 4947 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6adb2028-a62e-456d-8863-55da513e78f2-openstack-config-secret\") pod \"6adb2028-a62e-456d-8863-55da513e78f2\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.310143 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6adb2028-a62e-456d-8863-55da513e78f2-ca-certs\") pod \"6adb2028-a62e-456d-8863-55da513e78f2\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.310278 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf875\" (UniqueName: \"kubernetes.io/projected/6adb2028-a62e-456d-8863-55da513e78f2-kube-api-access-xf875\") pod \"6adb2028-a62e-456d-8863-55da513e78f2\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.310420 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6adb2028-a62e-456d-8863-55da513e78f2-test-operator-ephemeral-workdir\") pod \"6adb2028-a62e-456d-8863-55da513e78f2\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.310498 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6adb2028-a62e-456d-8863-55da513e78f2-test-operator-ephemeral-temporary\") pod \"6adb2028-a62e-456d-8863-55da513e78f2\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.310533 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod 
\"6adb2028-a62e-456d-8863-55da513e78f2\" (UID: \"6adb2028-a62e-456d-8863-55da513e78f2\") " Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.312065 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6adb2028-a62e-456d-8863-55da513e78f2-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "6adb2028-a62e-456d-8863-55da513e78f2" (UID: "6adb2028-a62e-456d-8863-55da513e78f2"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.312757 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6adb2028-a62e-456d-8863-55da513e78f2-config-data" (OuterVolumeSpecName: "config-data") pod "6adb2028-a62e-456d-8863-55da513e78f2" (UID: "6adb2028-a62e-456d-8863-55da513e78f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.317877 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6adb2028-a62e-456d-8863-55da513e78f2-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "6adb2028-a62e-456d-8863-55da513e78f2" (UID: "6adb2028-a62e-456d-8863-55da513e78f2"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.320569 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6adb2028-a62e-456d-8863-55da513e78f2-kube-api-access-xf875" (OuterVolumeSpecName: "kube-api-access-xf875") pod "6adb2028-a62e-456d-8863-55da513e78f2" (UID: "6adb2028-a62e-456d-8863-55da513e78f2"). InnerVolumeSpecName "kube-api-access-xf875". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.320919 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "test-operator-logs") pod "6adb2028-a62e-456d-8863-55da513e78f2" (UID: "6adb2028-a62e-456d-8863-55da513e78f2"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.346947 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6adb2028-a62e-456d-8863-55da513e78f2-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "6adb2028-a62e-456d-8863-55da513e78f2" (UID: "6adb2028-a62e-456d-8863-55da513e78f2"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.355831 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6adb2028-a62e-456d-8863-55da513e78f2-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6adb2028-a62e-456d-8863-55da513e78f2" (UID: "6adb2028-a62e-456d-8863-55da513e78f2"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.370671 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6adb2028-a62e-456d-8863-55da513e78f2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6adb2028-a62e-456d-8863-55da513e78f2" (UID: "6adb2028-a62e-456d-8863-55da513e78f2"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.376199 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6adb2028-a62e-456d-8863-55da513e78f2-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6adb2028-a62e-456d-8863-55da513e78f2" (UID: "6adb2028-a62e-456d-8863-55da513e78f2"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.413777 4947 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6adb2028-a62e-456d-8863-55da513e78f2-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.413816 4947 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6adb2028-a62e-456d-8863-55da513e78f2-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.414007 4947 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.414023 4947 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6adb2028-a62e-456d-8863-55da513e78f2-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.414034 4947 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6adb2028-a62e-456d-8863-55da513e78f2-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.414045 4947 reconciler_common.go:293] "Volume detached for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6adb2028-a62e-456d-8863-55da513e78f2-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.414056 4947 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6adb2028-a62e-456d-8863-55da513e78f2-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.414098 4947 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6adb2028-a62e-456d-8863-55da513e78f2-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.414110 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf875\" (UniqueName: \"kubernetes.io/projected/6adb2028-a62e-456d-8863-55da513e78f2-kube-api-access-xf875\") on node \"crc\" DevicePath \"\"" Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.466600 4947 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.516021 4947 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.941419 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6adb2028-a62e-456d-8863-55da513e78f2","Type":"ContainerDied","Data":"f3787fb4af1bfb24e6726dc546708bf62500f43e14803e2dbbbc591f52f1169a"} Nov 29 08:15:28 crc kubenswrapper[4947]: I1129 08:15:28.941463 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3787fb4af1bfb24e6726dc546708bf62500f43e14803e2dbbbc591f52f1169a" Nov 29 08:15:28 crc 
kubenswrapper[4947]: I1129 08:15:28.941554 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 29 08:15:31 crc kubenswrapper[4947]: I1129 08:15:31.490781 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 29 08:15:31 crc kubenswrapper[4947]: E1129 08:15:31.491710 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6adb2028-a62e-456d-8863-55da513e78f2" containerName="tempest-tests-tempest-tests-runner" Nov 29 08:15:31 crc kubenswrapper[4947]: I1129 08:15:31.491725 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="6adb2028-a62e-456d-8863-55da513e78f2" containerName="tempest-tests-tempest-tests-runner" Nov 29 08:15:31 crc kubenswrapper[4947]: E1129 08:15:31.491742 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0610d9a5-1571-498f-a368-733c4c6bf73a" containerName="collect-profiles" Nov 29 08:15:31 crc kubenswrapper[4947]: I1129 08:15:31.491749 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="0610d9a5-1571-498f-a368-733c4c6bf73a" containerName="collect-profiles" Nov 29 08:15:31 crc kubenswrapper[4947]: I1129 08:15:31.491987 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="0610d9a5-1571-498f-a368-733c4c6bf73a" containerName="collect-profiles" Nov 29 08:15:31 crc kubenswrapper[4947]: I1129 08:15:31.492007 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="6adb2028-a62e-456d-8863-55da513e78f2" containerName="tempest-tests-tempest-tests-runner" Nov 29 08:15:31 crc kubenswrapper[4947]: I1129 08:15:31.492807 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:15:31 crc kubenswrapper[4947]: I1129 08:15:31.494897 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zh8zj" Nov 29 08:15:31 crc kubenswrapper[4947]: I1129 08:15:31.499076 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 29 08:15:31 crc kubenswrapper[4947]: I1129 08:15:31.683055 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e64f8968-10a4-446e-9d16-2758fca7b95a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:15:31 crc kubenswrapper[4947]: I1129 08:15:31.683232 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t94r\" (UniqueName: \"kubernetes.io/projected/e64f8968-10a4-446e-9d16-2758fca7b95a-kube-api-access-5t94r\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e64f8968-10a4-446e-9d16-2758fca7b95a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:15:31 crc kubenswrapper[4947]: I1129 08:15:31.784742 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e64f8968-10a4-446e-9d16-2758fca7b95a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:15:31 crc kubenswrapper[4947]: I1129 08:15:31.784930 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t94r\" (UniqueName: 
\"kubernetes.io/projected/e64f8968-10a4-446e-9d16-2758fca7b95a-kube-api-access-5t94r\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e64f8968-10a4-446e-9d16-2758fca7b95a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:15:31 crc kubenswrapper[4947]: I1129 08:15:31.785495 4947 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e64f8968-10a4-446e-9d16-2758fca7b95a\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:15:31 crc kubenswrapper[4947]: I1129 08:15:31.806712 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t94r\" (UniqueName: \"kubernetes.io/projected/e64f8968-10a4-446e-9d16-2758fca7b95a-kube-api-access-5t94r\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e64f8968-10a4-446e-9d16-2758fca7b95a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:15:31 crc kubenswrapper[4947]: I1129 08:15:31.820996 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e64f8968-10a4-446e-9d16-2758fca7b95a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:15:32 crc kubenswrapper[4947]: I1129 08:15:32.123827 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:15:32 crc kubenswrapper[4947]: I1129 08:15:32.599206 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 08:15:32 crc kubenswrapper[4947]: I1129 08:15:32.604509 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 29 08:15:32 crc kubenswrapper[4947]: I1129 08:15:32.987387 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e64f8968-10a4-446e-9d16-2758fca7b95a","Type":"ContainerStarted","Data":"f746ca7b911344e86fe0be6c139e16195abe09f014fe76600356e5769f0389fe"} Nov 29 08:15:33 crc kubenswrapper[4947]: I1129 08:15:33.998351 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e64f8968-10a4-446e-9d16-2758fca7b95a","Type":"ContainerStarted","Data":"414468e9afbc12a921d5c459ae37c3915a3b6a03e047cdef385688ca540a9c4b"} Nov 29 08:15:35 crc kubenswrapper[4947]: I1129 08:15:35.031602 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=3.039418094 podStartE2EDuration="4.031581818s" podCreationTimestamp="2025-11-29 08:15:31 +0000 UTC" firstStartedPulling="2025-11-29 08:15:32.598961667 +0000 UTC m=+6083.643343748" lastFinishedPulling="2025-11-29 08:15:33.591125391 +0000 UTC m=+6084.635507472" observedRunningTime="2025-11-29 08:15:35.029213499 +0000 UTC m=+6086.073595580" watchObservedRunningTime="2025-11-29 08:15:35.031581818 +0000 UTC m=+6086.075963899" Nov 29 08:15:39 crc kubenswrapper[4947]: I1129 08:15:39.186951 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:15:39 crc kubenswrapper[4947]: E1129 
08:15:39.187819 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:15:42 crc kubenswrapper[4947]: I1129 08:15:42.574804 4947 scope.go:117] "RemoveContainer" containerID="e649ec8e6d1b98eba04f63f2b9b94231519a035455f80d49e3c828bd3bf6bdb3" Nov 29 08:15:51 crc kubenswrapper[4947]: I1129 08:15:51.180351 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:15:51 crc kubenswrapper[4947]: E1129 08:15:51.181211 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:16:02 crc kubenswrapper[4947]: I1129 08:16:02.178885 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:16:02 crc kubenswrapper[4947]: E1129 08:16:02.179671 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:16:14 crc 
kubenswrapper[4947]: I1129 08:16:14.966867 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p8sjr/must-gather-r59nr"] Nov 29 08:16:14 crc kubenswrapper[4947]: I1129 08:16:14.969044 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p8sjr/must-gather-r59nr" Nov 29 08:16:14 crc kubenswrapper[4947]: I1129 08:16:14.971735 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-p8sjr"/"openshift-service-ca.crt" Nov 29 08:16:14 crc kubenswrapper[4947]: I1129 08:16:14.978938 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-p8sjr"/"default-dockercfg-xgnbt" Nov 29 08:16:14 crc kubenswrapper[4947]: I1129 08:16:14.979389 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-p8sjr"/"kube-root-ca.crt" Nov 29 08:16:14 crc kubenswrapper[4947]: I1129 08:16:14.988063 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-p8sjr/must-gather-r59nr"] Nov 29 08:16:15 crc kubenswrapper[4947]: I1129 08:16:15.092957 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wg25\" (UniqueName: \"kubernetes.io/projected/ec8ff821-1ddd-4193-9469-4bdb054eb399-kube-api-access-6wg25\") pod \"must-gather-r59nr\" (UID: \"ec8ff821-1ddd-4193-9469-4bdb054eb399\") " pod="openshift-must-gather-p8sjr/must-gather-r59nr" Nov 29 08:16:15 crc kubenswrapper[4947]: I1129 08:16:15.093079 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ec8ff821-1ddd-4193-9469-4bdb054eb399-must-gather-output\") pod \"must-gather-r59nr\" (UID: \"ec8ff821-1ddd-4193-9469-4bdb054eb399\") " pod="openshift-must-gather-p8sjr/must-gather-r59nr" Nov 29 08:16:15 crc kubenswrapper[4947]: I1129 08:16:15.195326 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6wg25\" (UniqueName: \"kubernetes.io/projected/ec8ff821-1ddd-4193-9469-4bdb054eb399-kube-api-access-6wg25\") pod \"must-gather-r59nr\" (UID: \"ec8ff821-1ddd-4193-9469-4bdb054eb399\") " pod="openshift-must-gather-p8sjr/must-gather-r59nr" Nov 29 08:16:15 crc kubenswrapper[4947]: I1129 08:16:15.195431 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ec8ff821-1ddd-4193-9469-4bdb054eb399-must-gather-output\") pod \"must-gather-r59nr\" (UID: \"ec8ff821-1ddd-4193-9469-4bdb054eb399\") " pod="openshift-must-gather-p8sjr/must-gather-r59nr" Nov 29 08:16:15 crc kubenswrapper[4947]: I1129 08:16:15.195903 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ec8ff821-1ddd-4193-9469-4bdb054eb399-must-gather-output\") pod \"must-gather-r59nr\" (UID: \"ec8ff821-1ddd-4193-9469-4bdb054eb399\") " pod="openshift-must-gather-p8sjr/must-gather-r59nr" Nov 29 08:16:15 crc kubenswrapper[4947]: I1129 08:16:15.222416 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wg25\" (UniqueName: \"kubernetes.io/projected/ec8ff821-1ddd-4193-9469-4bdb054eb399-kube-api-access-6wg25\") pod \"must-gather-r59nr\" (UID: \"ec8ff821-1ddd-4193-9469-4bdb054eb399\") " pod="openshift-must-gather-p8sjr/must-gather-r59nr" Nov 29 08:16:15 crc kubenswrapper[4947]: I1129 08:16:15.290974 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p8sjr/must-gather-r59nr" Nov 29 08:16:15 crc kubenswrapper[4947]: I1129 08:16:15.771069 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-p8sjr/must-gather-r59nr"] Nov 29 08:16:16 crc kubenswrapper[4947]: I1129 08:16:16.179785 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:16:16 crc kubenswrapper[4947]: E1129 08:16:16.180394 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:16:16 crc kubenswrapper[4947]: I1129 08:16:16.407651 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p8sjr/must-gather-r59nr" event={"ID":"ec8ff821-1ddd-4193-9469-4bdb054eb399","Type":"ContainerStarted","Data":"91ab552d5bd23da5b765e6eda5e48f4ab5498d1eb8fcf9449795c4c654dd0724"} Nov 29 08:16:20 crc kubenswrapper[4947]: I1129 08:16:20.464349 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p8sjr/must-gather-r59nr" event={"ID":"ec8ff821-1ddd-4193-9469-4bdb054eb399","Type":"ContainerStarted","Data":"f3b5e3f574383843e7add8cb7cb68bf00c2a8044690404ca41541db99ec91530"} Nov 29 08:16:20 crc kubenswrapper[4947]: I1129 08:16:20.465566 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p8sjr/must-gather-r59nr" event={"ID":"ec8ff821-1ddd-4193-9469-4bdb054eb399","Type":"ContainerStarted","Data":"c23c2872ff5f6ea5ba8ba3bdbc6e6e90cf0544d65962b452556c7a1c3a3b0db5"} Nov 29 08:16:20 crc kubenswrapper[4947]: I1129 08:16:20.487954 4947 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-p8sjr/must-gather-r59nr" podStartSLOduration=2.5120756440000003 podStartE2EDuration="6.487931396s" podCreationTimestamp="2025-11-29 08:16:14 +0000 UTC" firstStartedPulling="2025-11-29 08:16:15.793035781 +0000 UTC m=+6126.837417862" lastFinishedPulling="2025-11-29 08:16:19.768891533 +0000 UTC m=+6130.813273614" observedRunningTime="2025-11-29 08:16:20.479113414 +0000 UTC m=+6131.523495495" watchObservedRunningTime="2025-11-29 08:16:20.487931396 +0000 UTC m=+6131.532313467" Nov 29 08:16:25 crc kubenswrapper[4947]: I1129 08:16:25.991645 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p8sjr/crc-debug-jlmgk"] Nov 29 08:16:25 crc kubenswrapper[4947]: I1129 08:16:25.993862 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p8sjr/crc-debug-jlmgk" Nov 29 08:16:26 crc kubenswrapper[4947]: I1129 08:16:26.068384 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj98r\" (UniqueName: \"kubernetes.io/projected/64dd1f09-edaf-4d48-9082-c2f93d0feaac-kube-api-access-jj98r\") pod \"crc-debug-jlmgk\" (UID: \"64dd1f09-edaf-4d48-9082-c2f93d0feaac\") " pod="openshift-must-gather-p8sjr/crc-debug-jlmgk" Nov 29 08:16:26 crc kubenswrapper[4947]: I1129 08:16:26.069137 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/64dd1f09-edaf-4d48-9082-c2f93d0feaac-host\") pod \"crc-debug-jlmgk\" (UID: \"64dd1f09-edaf-4d48-9082-c2f93d0feaac\") " pod="openshift-must-gather-p8sjr/crc-debug-jlmgk" Nov 29 08:16:26 crc kubenswrapper[4947]: I1129 08:16:26.172347 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj98r\" (UniqueName: \"kubernetes.io/projected/64dd1f09-edaf-4d48-9082-c2f93d0feaac-kube-api-access-jj98r\") pod \"crc-debug-jlmgk\" (UID: 
\"64dd1f09-edaf-4d48-9082-c2f93d0feaac\") " pod="openshift-must-gather-p8sjr/crc-debug-jlmgk" Nov 29 08:16:26 crc kubenswrapper[4947]: I1129 08:16:26.173693 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/64dd1f09-edaf-4d48-9082-c2f93d0feaac-host\") pod \"crc-debug-jlmgk\" (UID: \"64dd1f09-edaf-4d48-9082-c2f93d0feaac\") " pod="openshift-must-gather-p8sjr/crc-debug-jlmgk" Nov 29 08:16:26 crc kubenswrapper[4947]: I1129 08:16:26.173850 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/64dd1f09-edaf-4d48-9082-c2f93d0feaac-host\") pod \"crc-debug-jlmgk\" (UID: \"64dd1f09-edaf-4d48-9082-c2f93d0feaac\") " pod="openshift-must-gather-p8sjr/crc-debug-jlmgk" Nov 29 08:16:26 crc kubenswrapper[4947]: I1129 08:16:26.202180 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj98r\" (UniqueName: \"kubernetes.io/projected/64dd1f09-edaf-4d48-9082-c2f93d0feaac-kube-api-access-jj98r\") pod \"crc-debug-jlmgk\" (UID: \"64dd1f09-edaf-4d48-9082-c2f93d0feaac\") " pod="openshift-must-gather-p8sjr/crc-debug-jlmgk" Nov 29 08:16:26 crc kubenswrapper[4947]: I1129 08:16:26.317060 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p8sjr/crc-debug-jlmgk" Nov 29 08:16:26 crc kubenswrapper[4947]: I1129 08:16:26.528020 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p8sjr/crc-debug-jlmgk" event={"ID":"64dd1f09-edaf-4d48-9082-c2f93d0feaac","Type":"ContainerStarted","Data":"9d00965d9a3eaeee924683672e2df8de08ac9ffd80084b8695247375011ea97e"} Nov 29 08:16:31 crc kubenswrapper[4947]: I1129 08:16:31.178939 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:16:31 crc kubenswrapper[4947]: E1129 08:16:31.180063 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:16:40 crc kubenswrapper[4947]: I1129 08:16:40.733653 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p8sjr/crc-debug-jlmgk" event={"ID":"64dd1f09-edaf-4d48-9082-c2f93d0feaac","Type":"ContainerStarted","Data":"6bcb6b23de89b102d6f2be30597fd83d927299e8f10c5721809e21867218e602"} Nov 29 08:16:40 crc kubenswrapper[4947]: I1129 08:16:40.761390 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-p8sjr/crc-debug-jlmgk" podStartSLOduration=1.7898847020000002 podStartE2EDuration="15.761360272s" podCreationTimestamp="2025-11-29 08:16:25 +0000 UTC" firstStartedPulling="2025-11-29 08:16:26.376113702 +0000 UTC m=+6137.420495783" lastFinishedPulling="2025-11-29 08:16:40.347589272 +0000 UTC m=+6151.391971353" observedRunningTime="2025-11-29 08:16:40.752794076 +0000 UTC m=+6151.797176157" watchObservedRunningTime="2025-11-29 08:16:40.761360272 
+0000 UTC m=+6151.805742353" Nov 29 08:16:45 crc kubenswrapper[4947]: I1129 08:16:45.179540 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:16:45 crc kubenswrapper[4947]: E1129 08:16:45.180483 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:16:57 crc kubenswrapper[4947]: I1129 08:16:57.179874 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:16:57 crc kubenswrapper[4947]: E1129 08:16:57.180724 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:17:09 crc kubenswrapper[4947]: I1129 08:17:09.186071 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:17:09 crc kubenswrapper[4947]: E1129 08:17:09.187191 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" 
podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:17:21 crc kubenswrapper[4947]: I1129 08:17:21.179466 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:17:21 crc kubenswrapper[4947]: E1129 08:17:21.180236 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:17:28 crc kubenswrapper[4947]: I1129 08:17:28.220410 4947 generic.go:334] "Generic (PLEG): container finished" podID="64dd1f09-edaf-4d48-9082-c2f93d0feaac" containerID="6bcb6b23de89b102d6f2be30597fd83d927299e8f10c5721809e21867218e602" exitCode=0 Nov 29 08:17:28 crc kubenswrapper[4947]: I1129 08:17:28.220491 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p8sjr/crc-debug-jlmgk" event={"ID":"64dd1f09-edaf-4d48-9082-c2f93d0feaac","Type":"ContainerDied","Data":"6bcb6b23de89b102d6f2be30597fd83d927299e8f10c5721809e21867218e602"} Nov 29 08:17:29 crc kubenswrapper[4947]: I1129 08:17:29.363811 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p8sjr/crc-debug-jlmgk" Nov 29 08:17:29 crc kubenswrapper[4947]: I1129 08:17:29.395643 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-p8sjr/crc-debug-jlmgk"] Nov 29 08:17:29 crc kubenswrapper[4947]: I1129 08:17:29.405329 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-p8sjr/crc-debug-jlmgk"] Nov 29 08:17:29 crc kubenswrapper[4947]: I1129 08:17:29.485532 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/64dd1f09-edaf-4d48-9082-c2f93d0feaac-host\") pod \"64dd1f09-edaf-4d48-9082-c2f93d0feaac\" (UID: \"64dd1f09-edaf-4d48-9082-c2f93d0feaac\") " Nov 29 08:17:29 crc kubenswrapper[4947]: I1129 08:17:29.485662 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj98r\" (UniqueName: \"kubernetes.io/projected/64dd1f09-edaf-4d48-9082-c2f93d0feaac-kube-api-access-jj98r\") pod \"64dd1f09-edaf-4d48-9082-c2f93d0feaac\" (UID: \"64dd1f09-edaf-4d48-9082-c2f93d0feaac\") " Nov 29 08:17:29 crc kubenswrapper[4947]: I1129 08:17:29.485705 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64dd1f09-edaf-4d48-9082-c2f93d0feaac-host" (OuterVolumeSpecName: "host") pod "64dd1f09-edaf-4d48-9082-c2f93d0feaac" (UID: "64dd1f09-edaf-4d48-9082-c2f93d0feaac"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:17:29 crc kubenswrapper[4947]: I1129 08:17:29.486115 4947 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/64dd1f09-edaf-4d48-9082-c2f93d0feaac-host\") on node \"crc\" DevicePath \"\"" Nov 29 08:17:29 crc kubenswrapper[4947]: I1129 08:17:29.492294 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64dd1f09-edaf-4d48-9082-c2f93d0feaac-kube-api-access-jj98r" (OuterVolumeSpecName: "kube-api-access-jj98r") pod "64dd1f09-edaf-4d48-9082-c2f93d0feaac" (UID: "64dd1f09-edaf-4d48-9082-c2f93d0feaac"). InnerVolumeSpecName "kube-api-access-jj98r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:17:29 crc kubenswrapper[4947]: I1129 08:17:29.587718 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj98r\" (UniqueName: \"kubernetes.io/projected/64dd1f09-edaf-4d48-9082-c2f93d0feaac-kube-api-access-jj98r\") on node \"crc\" DevicePath \"\"" Nov 29 08:17:30 crc kubenswrapper[4947]: I1129 08:17:30.240037 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d00965d9a3eaeee924683672e2df8de08ac9ffd80084b8695247375011ea97e" Nov 29 08:17:30 crc kubenswrapper[4947]: I1129 08:17:30.240115 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p8sjr/crc-debug-jlmgk" Nov 29 08:17:30 crc kubenswrapper[4947]: I1129 08:17:30.853857 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p8sjr/crc-debug-rqb7r"] Nov 29 08:17:30 crc kubenswrapper[4947]: E1129 08:17:30.854768 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64dd1f09-edaf-4d48-9082-c2f93d0feaac" containerName="container-00" Nov 29 08:17:30 crc kubenswrapper[4947]: I1129 08:17:30.854785 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="64dd1f09-edaf-4d48-9082-c2f93d0feaac" containerName="container-00" Nov 29 08:17:30 crc kubenswrapper[4947]: I1129 08:17:30.855035 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="64dd1f09-edaf-4d48-9082-c2f93d0feaac" containerName="container-00" Nov 29 08:17:30 crc kubenswrapper[4947]: I1129 08:17:30.855863 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p8sjr/crc-debug-rqb7r" Nov 29 08:17:30 crc kubenswrapper[4947]: I1129 08:17:30.914237 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6thq\" (UniqueName: \"kubernetes.io/projected/9815f6db-468a-4481-886a-63cc05412339-kube-api-access-c6thq\") pod \"crc-debug-rqb7r\" (UID: \"9815f6db-468a-4481-886a-63cc05412339\") " pod="openshift-must-gather-p8sjr/crc-debug-rqb7r" Nov 29 08:17:30 crc kubenswrapper[4947]: I1129 08:17:30.914332 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9815f6db-468a-4481-886a-63cc05412339-host\") pod \"crc-debug-rqb7r\" (UID: \"9815f6db-468a-4481-886a-63cc05412339\") " pod="openshift-must-gather-p8sjr/crc-debug-rqb7r" Nov 29 08:17:31 crc kubenswrapper[4947]: I1129 08:17:31.017063 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6thq\" (UniqueName: 
\"kubernetes.io/projected/9815f6db-468a-4481-886a-63cc05412339-kube-api-access-c6thq\") pod \"crc-debug-rqb7r\" (UID: \"9815f6db-468a-4481-886a-63cc05412339\") " pod="openshift-must-gather-p8sjr/crc-debug-rqb7r" Nov 29 08:17:31 crc kubenswrapper[4947]: I1129 08:17:31.017160 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9815f6db-468a-4481-886a-63cc05412339-host\") pod \"crc-debug-rqb7r\" (UID: \"9815f6db-468a-4481-886a-63cc05412339\") " pod="openshift-must-gather-p8sjr/crc-debug-rqb7r" Nov 29 08:17:31 crc kubenswrapper[4947]: I1129 08:17:31.017314 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9815f6db-468a-4481-886a-63cc05412339-host\") pod \"crc-debug-rqb7r\" (UID: \"9815f6db-468a-4481-886a-63cc05412339\") " pod="openshift-must-gather-p8sjr/crc-debug-rqb7r" Nov 29 08:17:31 crc kubenswrapper[4947]: I1129 08:17:31.046109 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6thq\" (UniqueName: \"kubernetes.io/projected/9815f6db-468a-4481-886a-63cc05412339-kube-api-access-c6thq\") pod \"crc-debug-rqb7r\" (UID: \"9815f6db-468a-4481-886a-63cc05412339\") " pod="openshift-must-gather-p8sjr/crc-debug-rqb7r" Nov 29 08:17:31 crc kubenswrapper[4947]: I1129 08:17:31.178663 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p8sjr/crc-debug-rqb7r" Nov 29 08:17:31 crc kubenswrapper[4947]: I1129 08:17:31.193790 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64dd1f09-edaf-4d48-9082-c2f93d0feaac" path="/var/lib/kubelet/pods/64dd1f09-edaf-4d48-9082-c2f93d0feaac/volumes" Nov 29 08:17:31 crc kubenswrapper[4947]: I1129 08:17:31.253465 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p8sjr/crc-debug-rqb7r" event={"ID":"9815f6db-468a-4481-886a-63cc05412339","Type":"ContainerStarted","Data":"cae3a8c69daba273f4360acc3f6496912eadd0bdbf16fbf5178a8d6e8374b517"} Nov 29 08:17:32 crc kubenswrapper[4947]: I1129 08:17:32.264083 4947 generic.go:334] "Generic (PLEG): container finished" podID="9815f6db-468a-4481-886a-63cc05412339" containerID="645b34341bf23009183c7132bc344f65cc7e0f56d692b14530f246ab814f4220" exitCode=0 Nov 29 08:17:32 crc kubenswrapper[4947]: I1129 08:17:32.264408 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p8sjr/crc-debug-rqb7r" event={"ID":"9815f6db-468a-4481-886a-63cc05412339","Type":"ContainerDied","Data":"645b34341bf23009183c7132bc344f65cc7e0f56d692b14530f246ab814f4220"} Nov 29 08:17:33 crc kubenswrapper[4947]: I1129 08:17:33.373238 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p8sjr/crc-debug-rqb7r" Nov 29 08:17:33 crc kubenswrapper[4947]: I1129 08:17:33.462613 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6thq\" (UniqueName: \"kubernetes.io/projected/9815f6db-468a-4481-886a-63cc05412339-kube-api-access-c6thq\") pod \"9815f6db-468a-4481-886a-63cc05412339\" (UID: \"9815f6db-468a-4481-886a-63cc05412339\") " Nov 29 08:17:33 crc kubenswrapper[4947]: I1129 08:17:33.463315 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9815f6db-468a-4481-886a-63cc05412339-host\") pod \"9815f6db-468a-4481-886a-63cc05412339\" (UID: \"9815f6db-468a-4481-886a-63cc05412339\") " Nov 29 08:17:33 crc kubenswrapper[4947]: I1129 08:17:33.465335 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9815f6db-468a-4481-886a-63cc05412339-host" (OuterVolumeSpecName: "host") pod "9815f6db-468a-4481-886a-63cc05412339" (UID: "9815f6db-468a-4481-886a-63cc05412339"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:17:33 crc kubenswrapper[4947]: I1129 08:17:33.482417 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9815f6db-468a-4481-886a-63cc05412339-kube-api-access-c6thq" (OuterVolumeSpecName: "kube-api-access-c6thq") pod "9815f6db-468a-4481-886a-63cc05412339" (UID: "9815f6db-468a-4481-886a-63cc05412339"). InnerVolumeSpecName "kube-api-access-c6thq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:17:33 crc kubenswrapper[4947]: I1129 08:17:33.565783 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6thq\" (UniqueName: \"kubernetes.io/projected/9815f6db-468a-4481-886a-63cc05412339-kube-api-access-c6thq\") on node \"crc\" DevicePath \"\"" Nov 29 08:17:33 crc kubenswrapper[4947]: I1129 08:17:33.565829 4947 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9815f6db-468a-4481-886a-63cc05412339-host\") on node \"crc\" DevicePath \"\"" Nov 29 08:17:34 crc kubenswrapper[4947]: I1129 08:17:34.300014 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p8sjr/crc-debug-rqb7r" event={"ID":"9815f6db-468a-4481-886a-63cc05412339","Type":"ContainerDied","Data":"cae3a8c69daba273f4360acc3f6496912eadd0bdbf16fbf5178a8d6e8374b517"} Nov 29 08:17:34 crc kubenswrapper[4947]: I1129 08:17:34.300065 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cae3a8c69daba273f4360acc3f6496912eadd0bdbf16fbf5178a8d6e8374b517" Nov 29 08:17:34 crc kubenswrapper[4947]: I1129 08:17:34.300126 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p8sjr/crc-debug-rqb7r" Nov 29 08:17:34 crc kubenswrapper[4947]: I1129 08:17:34.936414 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-p8sjr/crc-debug-rqb7r"] Nov 29 08:17:34 crc kubenswrapper[4947]: I1129 08:17:34.949104 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-p8sjr/crc-debug-rqb7r"] Nov 29 08:17:35 crc kubenswrapper[4947]: I1129 08:17:35.191073 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9815f6db-468a-4481-886a-63cc05412339" path="/var/lib/kubelet/pods/9815f6db-468a-4481-886a-63cc05412339/volumes" Nov 29 08:17:36 crc kubenswrapper[4947]: I1129 08:17:36.162570 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p8sjr/crc-debug-n58bp"] Nov 29 08:17:36 crc kubenswrapper[4947]: E1129 08:17:36.163053 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9815f6db-468a-4481-886a-63cc05412339" containerName="container-00" Nov 29 08:17:36 crc kubenswrapper[4947]: I1129 08:17:36.163066 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="9815f6db-468a-4481-886a-63cc05412339" containerName="container-00" Nov 29 08:17:36 crc kubenswrapper[4947]: I1129 08:17:36.163351 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="9815f6db-468a-4481-886a-63cc05412339" containerName="container-00" Nov 29 08:17:36 crc kubenswrapper[4947]: I1129 08:17:36.164088 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p8sjr/crc-debug-n58bp" Nov 29 08:17:36 crc kubenswrapper[4947]: I1129 08:17:36.183867 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:17:36 crc kubenswrapper[4947]: E1129 08:17:36.184186 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:17:36 crc kubenswrapper[4947]: I1129 08:17:36.328279 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72g5p\" (UniqueName: \"kubernetes.io/projected/de04b653-116c-4500-b39c-c40da93be1c0-kube-api-access-72g5p\") pod \"crc-debug-n58bp\" (UID: \"de04b653-116c-4500-b39c-c40da93be1c0\") " pod="openshift-must-gather-p8sjr/crc-debug-n58bp" Nov 29 08:17:36 crc kubenswrapper[4947]: I1129 08:17:36.328413 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/de04b653-116c-4500-b39c-c40da93be1c0-host\") pod \"crc-debug-n58bp\" (UID: \"de04b653-116c-4500-b39c-c40da93be1c0\") " pod="openshift-must-gather-p8sjr/crc-debug-n58bp" Nov 29 08:17:36 crc kubenswrapper[4947]: I1129 08:17:36.431233 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72g5p\" (UniqueName: \"kubernetes.io/projected/de04b653-116c-4500-b39c-c40da93be1c0-kube-api-access-72g5p\") pod \"crc-debug-n58bp\" (UID: \"de04b653-116c-4500-b39c-c40da93be1c0\") " pod="openshift-must-gather-p8sjr/crc-debug-n58bp" Nov 29 08:17:36 crc kubenswrapper[4947]: I1129 08:17:36.431669 
4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/de04b653-116c-4500-b39c-c40da93be1c0-host\") pod \"crc-debug-n58bp\" (UID: \"de04b653-116c-4500-b39c-c40da93be1c0\") " pod="openshift-must-gather-p8sjr/crc-debug-n58bp" Nov 29 08:17:36 crc kubenswrapper[4947]: I1129 08:17:36.431754 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/de04b653-116c-4500-b39c-c40da93be1c0-host\") pod \"crc-debug-n58bp\" (UID: \"de04b653-116c-4500-b39c-c40da93be1c0\") " pod="openshift-must-gather-p8sjr/crc-debug-n58bp" Nov 29 08:17:36 crc kubenswrapper[4947]: I1129 08:17:36.460113 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72g5p\" (UniqueName: \"kubernetes.io/projected/de04b653-116c-4500-b39c-c40da93be1c0-kube-api-access-72g5p\") pod \"crc-debug-n58bp\" (UID: \"de04b653-116c-4500-b39c-c40da93be1c0\") " pod="openshift-must-gather-p8sjr/crc-debug-n58bp" Nov 29 08:17:36 crc kubenswrapper[4947]: I1129 08:17:36.491477 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p8sjr/crc-debug-n58bp" Nov 29 08:17:37 crc kubenswrapper[4947]: I1129 08:17:37.331372 4947 generic.go:334] "Generic (PLEG): container finished" podID="de04b653-116c-4500-b39c-c40da93be1c0" containerID="83170bd0554e46c3674a57a10fb1655568dacec8df71eecc67e6d1e19aab7cf9" exitCode=0 Nov 29 08:17:37 crc kubenswrapper[4947]: I1129 08:17:37.331507 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p8sjr/crc-debug-n58bp" event={"ID":"de04b653-116c-4500-b39c-c40da93be1c0","Type":"ContainerDied","Data":"83170bd0554e46c3674a57a10fb1655568dacec8df71eecc67e6d1e19aab7cf9"} Nov 29 08:17:37 crc kubenswrapper[4947]: I1129 08:17:37.331781 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p8sjr/crc-debug-n58bp" event={"ID":"de04b653-116c-4500-b39c-c40da93be1c0","Type":"ContainerStarted","Data":"6ea0c6138d70983b85248186df2e921173866ba69d2cad924e1ac6ac82816323"} Nov 29 08:17:37 crc kubenswrapper[4947]: I1129 08:17:37.371767 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-p8sjr/crc-debug-n58bp"] Nov 29 08:17:37 crc kubenswrapper[4947]: I1129 08:17:37.380787 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-p8sjr/crc-debug-n58bp"] Nov 29 08:17:38 crc kubenswrapper[4947]: I1129 08:17:38.272774 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bbqqh"] Nov 29 08:17:38 crc kubenswrapper[4947]: E1129 08:17:38.274103 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de04b653-116c-4500-b39c-c40da93be1c0" containerName="container-00" Nov 29 08:17:38 crc kubenswrapper[4947]: I1129 08:17:38.274137 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="de04b653-116c-4500-b39c-c40da93be1c0" containerName="container-00" Nov 29 08:17:38 crc kubenswrapper[4947]: I1129 08:17:38.274531 4947 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="de04b653-116c-4500-b39c-c40da93be1c0" containerName="container-00" Nov 29 08:17:38 crc kubenswrapper[4947]: I1129 08:17:38.277350 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bbqqh" Nov 29 08:17:38 crc kubenswrapper[4947]: I1129 08:17:38.288115 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bbqqh"] Nov 29 08:17:38 crc kubenswrapper[4947]: I1129 08:17:38.391784 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be430ea7-e7df-4bdf-9750-167af2ac6a09-catalog-content\") pod \"certified-operators-bbqqh\" (UID: \"be430ea7-e7df-4bdf-9750-167af2ac6a09\") " pod="openshift-marketplace/certified-operators-bbqqh" Nov 29 08:17:38 crc kubenswrapper[4947]: I1129 08:17:38.391921 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be430ea7-e7df-4bdf-9750-167af2ac6a09-utilities\") pod \"certified-operators-bbqqh\" (UID: \"be430ea7-e7df-4bdf-9750-167af2ac6a09\") " pod="openshift-marketplace/certified-operators-bbqqh" Nov 29 08:17:38 crc kubenswrapper[4947]: I1129 08:17:38.392044 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz6f8\" (UniqueName: \"kubernetes.io/projected/be430ea7-e7df-4bdf-9750-167af2ac6a09-kube-api-access-dz6f8\") pod \"certified-operators-bbqqh\" (UID: \"be430ea7-e7df-4bdf-9750-167af2ac6a09\") " pod="openshift-marketplace/certified-operators-bbqqh" Nov 29 08:17:38 crc kubenswrapper[4947]: I1129 08:17:38.467260 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p8sjr/crc-debug-n58bp" Nov 29 08:17:38 crc kubenswrapper[4947]: I1129 08:17:38.494354 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz6f8\" (UniqueName: \"kubernetes.io/projected/be430ea7-e7df-4bdf-9750-167af2ac6a09-kube-api-access-dz6f8\") pod \"certified-operators-bbqqh\" (UID: \"be430ea7-e7df-4bdf-9750-167af2ac6a09\") " pod="openshift-marketplace/certified-operators-bbqqh" Nov 29 08:17:38 crc kubenswrapper[4947]: I1129 08:17:38.495128 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be430ea7-e7df-4bdf-9750-167af2ac6a09-catalog-content\") pod \"certified-operators-bbqqh\" (UID: \"be430ea7-e7df-4bdf-9750-167af2ac6a09\") " pod="openshift-marketplace/certified-operators-bbqqh" Nov 29 08:17:38 crc kubenswrapper[4947]: I1129 08:17:38.495212 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be430ea7-e7df-4bdf-9750-167af2ac6a09-utilities\") pod \"certified-operators-bbqqh\" (UID: \"be430ea7-e7df-4bdf-9750-167af2ac6a09\") " pod="openshift-marketplace/certified-operators-bbqqh" Nov 29 08:17:38 crc kubenswrapper[4947]: I1129 08:17:38.495860 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be430ea7-e7df-4bdf-9750-167af2ac6a09-utilities\") pod \"certified-operators-bbqqh\" (UID: \"be430ea7-e7df-4bdf-9750-167af2ac6a09\") " pod="openshift-marketplace/certified-operators-bbqqh" Nov 29 08:17:38 crc kubenswrapper[4947]: I1129 08:17:38.496541 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be430ea7-e7df-4bdf-9750-167af2ac6a09-catalog-content\") pod \"certified-operators-bbqqh\" (UID: \"be430ea7-e7df-4bdf-9750-167af2ac6a09\") " 
pod="openshift-marketplace/certified-operators-bbqqh" Nov 29 08:17:38 crc kubenswrapper[4947]: I1129 08:17:38.518652 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz6f8\" (UniqueName: \"kubernetes.io/projected/be430ea7-e7df-4bdf-9750-167af2ac6a09-kube-api-access-dz6f8\") pod \"certified-operators-bbqqh\" (UID: \"be430ea7-e7df-4bdf-9750-167af2ac6a09\") " pod="openshift-marketplace/certified-operators-bbqqh" Nov 29 08:17:38 crc kubenswrapper[4947]: I1129 08:17:38.596223 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/de04b653-116c-4500-b39c-c40da93be1c0-host\") pod \"de04b653-116c-4500-b39c-c40da93be1c0\" (UID: \"de04b653-116c-4500-b39c-c40da93be1c0\") " Nov 29 08:17:38 crc kubenswrapper[4947]: I1129 08:17:38.596383 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72g5p\" (UniqueName: \"kubernetes.io/projected/de04b653-116c-4500-b39c-c40da93be1c0-kube-api-access-72g5p\") pod \"de04b653-116c-4500-b39c-c40da93be1c0\" (UID: \"de04b653-116c-4500-b39c-c40da93be1c0\") " Nov 29 08:17:38 crc kubenswrapper[4947]: I1129 08:17:38.597565 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de04b653-116c-4500-b39c-c40da93be1c0-host" (OuterVolumeSpecName: "host") pod "de04b653-116c-4500-b39c-c40da93be1c0" (UID: "de04b653-116c-4500-b39c-c40da93be1c0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:17:38 crc kubenswrapper[4947]: I1129 08:17:38.601160 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de04b653-116c-4500-b39c-c40da93be1c0-kube-api-access-72g5p" (OuterVolumeSpecName: "kube-api-access-72g5p") pod "de04b653-116c-4500-b39c-c40da93be1c0" (UID: "de04b653-116c-4500-b39c-c40da93be1c0"). InnerVolumeSpecName "kube-api-access-72g5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:17:38 crc kubenswrapper[4947]: I1129 08:17:38.605334 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bbqqh" Nov 29 08:17:38 crc kubenswrapper[4947]: I1129 08:17:38.699492 4947 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/de04b653-116c-4500-b39c-c40da93be1c0-host\") on node \"crc\" DevicePath \"\"" Nov 29 08:17:38 crc kubenswrapper[4947]: I1129 08:17:38.699768 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72g5p\" (UniqueName: \"kubernetes.io/projected/de04b653-116c-4500-b39c-c40da93be1c0-kube-api-access-72g5p\") on node \"crc\" DevicePath \"\"" Nov 29 08:17:39 crc kubenswrapper[4947]: W1129 08:17:39.194273 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe430ea7_e7df_4bdf_9750_167af2ac6a09.slice/crio-3f08dcb40b692d1463753115cf28b1a5339581beec5dbd783f248575c77d65c0 WatchSource:0}: Error finding container 3f08dcb40b692d1463753115cf28b1a5339581beec5dbd783f248575c77d65c0: Status 404 returned error can't find the container with id 3f08dcb40b692d1463753115cf28b1a5339581beec5dbd783f248575c77d65c0 Nov 29 08:17:39 crc kubenswrapper[4947]: I1129 08:17:39.203851 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de04b653-116c-4500-b39c-c40da93be1c0" path="/var/lib/kubelet/pods/de04b653-116c-4500-b39c-c40da93be1c0/volumes" Nov 29 08:17:39 crc kubenswrapper[4947]: I1129 08:17:39.204733 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bbqqh"] Nov 29 08:17:39 crc kubenswrapper[4947]: I1129 08:17:39.357607 4947 scope.go:117] "RemoveContainer" containerID="83170bd0554e46c3674a57a10fb1655568dacec8df71eecc67e6d1e19aab7cf9" Nov 29 08:17:39 crc kubenswrapper[4947]: I1129 08:17:39.357748 4947 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p8sjr/crc-debug-n58bp" Nov 29 08:17:39 crc kubenswrapper[4947]: I1129 08:17:39.361387 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbqqh" event={"ID":"be430ea7-e7df-4bdf-9750-167af2ac6a09","Type":"ContainerStarted","Data":"3f08dcb40b692d1463753115cf28b1a5339581beec5dbd783f248575c77d65c0"} Nov 29 08:17:40 crc kubenswrapper[4947]: I1129 08:17:40.377700 4947 generic.go:334] "Generic (PLEG): container finished" podID="be430ea7-e7df-4bdf-9750-167af2ac6a09" containerID="d5d77cef23b7d14c2a9476937b95d9d7d4fa893828fe84d620d222985026915b" exitCode=0 Nov 29 08:17:40 crc kubenswrapper[4947]: I1129 08:17:40.377822 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbqqh" event={"ID":"be430ea7-e7df-4bdf-9750-167af2ac6a09","Type":"ContainerDied","Data":"d5d77cef23b7d14c2a9476937b95d9d7d4fa893828fe84d620d222985026915b"} Nov 29 08:17:42 crc kubenswrapper[4947]: I1129 08:17:42.398073 4947 generic.go:334] "Generic (PLEG): container finished" podID="be430ea7-e7df-4bdf-9750-167af2ac6a09" containerID="cb89bbc8198bf787912ccc6c6b47126ab60f6d128971c22afac58fbd5b91eeb8" exitCode=0 Nov 29 08:17:42 crc kubenswrapper[4947]: I1129 08:17:42.398179 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbqqh" event={"ID":"be430ea7-e7df-4bdf-9750-167af2ac6a09","Type":"ContainerDied","Data":"cb89bbc8198bf787912ccc6c6b47126ab60f6d128971c22afac58fbd5b91eeb8"} Nov 29 08:17:43 crc kubenswrapper[4947]: I1129 08:17:43.421022 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbqqh" event={"ID":"be430ea7-e7df-4bdf-9750-167af2ac6a09","Type":"ContainerStarted","Data":"20cc126d4cf04cdf7ab70c80cd2ba860e57122e0467ef699a5949b34922516fa"} Nov 29 08:17:47 crc kubenswrapper[4947]: I1129 08:17:47.180185 4947 scope.go:117] 
"RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:17:47 crc kubenswrapper[4947]: E1129 08:17:47.181214 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:17:48 crc kubenswrapper[4947]: I1129 08:17:48.605779 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bbqqh" Nov 29 08:17:48 crc kubenswrapper[4947]: I1129 08:17:48.605882 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bbqqh" Nov 29 08:17:48 crc kubenswrapper[4947]: I1129 08:17:48.669052 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bbqqh" Nov 29 08:17:48 crc kubenswrapper[4947]: I1129 08:17:48.700406 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bbqqh" podStartSLOduration=7.885812453 podStartE2EDuration="10.700387183s" podCreationTimestamp="2025-11-29 08:17:38 +0000 UTC" firstStartedPulling="2025-11-29 08:17:40.379820252 +0000 UTC m=+6211.424202333" lastFinishedPulling="2025-11-29 08:17:43.194394982 +0000 UTC m=+6214.238777063" observedRunningTime="2025-11-29 08:17:43.441913177 +0000 UTC m=+6214.486295258" watchObservedRunningTime="2025-11-29 08:17:48.700387183 +0000 UTC m=+6219.744769264" Nov 29 08:17:49 crc kubenswrapper[4947]: I1129 08:17:49.526462 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bbqqh" Nov 29 
08:17:49 crc kubenswrapper[4947]: I1129 08:17:49.570509 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bbqqh"] Nov 29 08:17:51 crc kubenswrapper[4947]: I1129 08:17:51.496955 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bbqqh" podUID="be430ea7-e7df-4bdf-9750-167af2ac6a09" containerName="registry-server" containerID="cri-o://20cc126d4cf04cdf7ab70c80cd2ba860e57122e0467ef699a5949b34922516fa" gracePeriod=2 Nov 29 08:17:52 crc kubenswrapper[4947]: I1129 08:17:52.511199 4947 generic.go:334] "Generic (PLEG): container finished" podID="be430ea7-e7df-4bdf-9750-167af2ac6a09" containerID="20cc126d4cf04cdf7ab70c80cd2ba860e57122e0467ef699a5949b34922516fa" exitCode=0 Nov 29 08:17:52 crc kubenswrapper[4947]: I1129 08:17:52.511526 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbqqh" event={"ID":"be430ea7-e7df-4bdf-9750-167af2ac6a09","Type":"ContainerDied","Data":"20cc126d4cf04cdf7ab70c80cd2ba860e57122e0467ef699a5949b34922516fa"} Nov 29 08:17:52 crc kubenswrapper[4947]: I1129 08:17:52.726134 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bbqqh" Nov 29 08:17:52 crc kubenswrapper[4947]: I1129 08:17:52.810493 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz6f8\" (UniqueName: \"kubernetes.io/projected/be430ea7-e7df-4bdf-9750-167af2ac6a09-kube-api-access-dz6f8\") pod \"be430ea7-e7df-4bdf-9750-167af2ac6a09\" (UID: \"be430ea7-e7df-4bdf-9750-167af2ac6a09\") " Nov 29 08:17:52 crc kubenswrapper[4947]: I1129 08:17:52.810847 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be430ea7-e7df-4bdf-9750-167af2ac6a09-catalog-content\") pod \"be430ea7-e7df-4bdf-9750-167af2ac6a09\" (UID: \"be430ea7-e7df-4bdf-9750-167af2ac6a09\") " Nov 29 08:17:52 crc kubenswrapper[4947]: I1129 08:17:52.810949 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be430ea7-e7df-4bdf-9750-167af2ac6a09-utilities\") pod \"be430ea7-e7df-4bdf-9750-167af2ac6a09\" (UID: \"be430ea7-e7df-4bdf-9750-167af2ac6a09\") " Nov 29 08:17:52 crc kubenswrapper[4947]: I1129 08:17:52.812268 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be430ea7-e7df-4bdf-9750-167af2ac6a09-utilities" (OuterVolumeSpecName: "utilities") pod "be430ea7-e7df-4bdf-9750-167af2ac6a09" (UID: "be430ea7-e7df-4bdf-9750-167af2ac6a09"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:17:52 crc kubenswrapper[4947]: I1129 08:17:52.819699 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be430ea7-e7df-4bdf-9750-167af2ac6a09-kube-api-access-dz6f8" (OuterVolumeSpecName: "kube-api-access-dz6f8") pod "be430ea7-e7df-4bdf-9750-167af2ac6a09" (UID: "be430ea7-e7df-4bdf-9750-167af2ac6a09"). InnerVolumeSpecName "kube-api-access-dz6f8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:17:52 crc kubenswrapper[4947]: I1129 08:17:52.868977 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be430ea7-e7df-4bdf-9750-167af2ac6a09-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be430ea7-e7df-4bdf-9750-167af2ac6a09" (UID: "be430ea7-e7df-4bdf-9750-167af2ac6a09"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:17:52 crc kubenswrapper[4947]: I1129 08:17:52.914694 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz6f8\" (UniqueName: \"kubernetes.io/projected/be430ea7-e7df-4bdf-9750-167af2ac6a09-kube-api-access-dz6f8\") on node \"crc\" DevicePath \"\"" Nov 29 08:17:52 crc kubenswrapper[4947]: I1129 08:17:52.914743 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be430ea7-e7df-4bdf-9750-167af2ac6a09-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:17:52 crc kubenswrapper[4947]: I1129 08:17:52.914751 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be430ea7-e7df-4bdf-9750-167af2ac6a09-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:17:53 crc kubenswrapper[4947]: I1129 08:17:53.527086 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbqqh" event={"ID":"be430ea7-e7df-4bdf-9750-167af2ac6a09","Type":"ContainerDied","Data":"3f08dcb40b692d1463753115cf28b1a5339581beec5dbd783f248575c77d65c0"} Nov 29 08:17:53 crc kubenswrapper[4947]: I1129 08:17:53.527454 4947 scope.go:117] "RemoveContainer" containerID="20cc126d4cf04cdf7ab70c80cd2ba860e57122e0467ef699a5949b34922516fa" Nov 29 08:17:53 crc kubenswrapper[4947]: I1129 08:17:53.527143 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bbqqh" Nov 29 08:17:53 crc kubenswrapper[4947]: I1129 08:17:53.561340 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bbqqh"] Nov 29 08:17:53 crc kubenswrapper[4947]: I1129 08:17:53.570382 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bbqqh"] Nov 29 08:17:53 crc kubenswrapper[4947]: I1129 08:17:53.589546 4947 scope.go:117] "RemoveContainer" containerID="cb89bbc8198bf787912ccc6c6b47126ab60f6d128971c22afac58fbd5b91eeb8" Nov 29 08:17:53 crc kubenswrapper[4947]: I1129 08:17:53.620177 4947 scope.go:117] "RemoveContainer" containerID="d5d77cef23b7d14c2a9476937b95d9d7d4fa893828fe84d620d222985026915b" Nov 29 08:17:54 crc kubenswrapper[4947]: I1129 08:17:54.975772 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j9fcq"] Nov 29 08:17:54 crc kubenswrapper[4947]: E1129 08:17:54.976275 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be430ea7-e7df-4bdf-9750-167af2ac6a09" containerName="extract-utilities" Nov 29 08:17:54 crc kubenswrapper[4947]: I1129 08:17:54.976293 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="be430ea7-e7df-4bdf-9750-167af2ac6a09" containerName="extract-utilities" Nov 29 08:17:54 crc kubenswrapper[4947]: E1129 08:17:54.976312 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be430ea7-e7df-4bdf-9750-167af2ac6a09" containerName="registry-server" Nov 29 08:17:54 crc kubenswrapper[4947]: I1129 08:17:54.976320 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="be430ea7-e7df-4bdf-9750-167af2ac6a09" containerName="registry-server" Nov 29 08:17:54 crc kubenswrapper[4947]: E1129 08:17:54.976340 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be430ea7-e7df-4bdf-9750-167af2ac6a09" containerName="extract-content" Nov 29 08:17:54 crc kubenswrapper[4947]: I1129 
08:17:54.976348 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="be430ea7-e7df-4bdf-9750-167af2ac6a09" containerName="extract-content" Nov 29 08:17:54 crc kubenswrapper[4947]: I1129 08:17:54.976580 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="be430ea7-e7df-4bdf-9750-167af2ac6a09" containerName="registry-server" Nov 29 08:17:54 crc kubenswrapper[4947]: I1129 08:17:54.979868 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j9fcq" Nov 29 08:17:54 crc kubenswrapper[4947]: I1129 08:17:54.991309 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j9fcq"] Nov 29 08:17:55 crc kubenswrapper[4947]: I1129 08:17:55.066001 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35347c86-909d-4ea3-b70f-70642eea076f-catalog-content\") pod \"redhat-operators-j9fcq\" (UID: \"35347c86-909d-4ea3-b70f-70642eea076f\") " pod="openshift-marketplace/redhat-operators-j9fcq" Nov 29 08:17:55 crc kubenswrapper[4947]: I1129 08:17:55.066059 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35347c86-909d-4ea3-b70f-70642eea076f-utilities\") pod \"redhat-operators-j9fcq\" (UID: \"35347c86-909d-4ea3-b70f-70642eea076f\") " pod="openshift-marketplace/redhat-operators-j9fcq" Nov 29 08:17:55 crc kubenswrapper[4947]: I1129 08:17:55.066119 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7xfp\" (UniqueName: \"kubernetes.io/projected/35347c86-909d-4ea3-b70f-70642eea076f-kube-api-access-t7xfp\") pod \"redhat-operators-j9fcq\" (UID: \"35347c86-909d-4ea3-b70f-70642eea076f\") " pod="openshift-marketplace/redhat-operators-j9fcq" Nov 29 08:17:55 crc kubenswrapper[4947]: I1129 
08:17:55.167933 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35347c86-909d-4ea3-b70f-70642eea076f-catalog-content\") pod \"redhat-operators-j9fcq\" (UID: \"35347c86-909d-4ea3-b70f-70642eea076f\") " pod="openshift-marketplace/redhat-operators-j9fcq" Nov 29 08:17:55 crc kubenswrapper[4947]: I1129 08:17:55.167981 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35347c86-909d-4ea3-b70f-70642eea076f-utilities\") pod \"redhat-operators-j9fcq\" (UID: \"35347c86-909d-4ea3-b70f-70642eea076f\") " pod="openshift-marketplace/redhat-operators-j9fcq" Nov 29 08:17:55 crc kubenswrapper[4947]: I1129 08:17:55.168014 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7xfp\" (UniqueName: \"kubernetes.io/projected/35347c86-909d-4ea3-b70f-70642eea076f-kube-api-access-t7xfp\") pod \"redhat-operators-j9fcq\" (UID: \"35347c86-909d-4ea3-b70f-70642eea076f\") " pod="openshift-marketplace/redhat-operators-j9fcq" Nov 29 08:17:55 crc kubenswrapper[4947]: I1129 08:17:55.168625 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35347c86-909d-4ea3-b70f-70642eea076f-catalog-content\") pod \"redhat-operators-j9fcq\" (UID: \"35347c86-909d-4ea3-b70f-70642eea076f\") " pod="openshift-marketplace/redhat-operators-j9fcq" Nov 29 08:17:55 crc kubenswrapper[4947]: I1129 08:17:55.168625 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35347c86-909d-4ea3-b70f-70642eea076f-utilities\") pod \"redhat-operators-j9fcq\" (UID: \"35347c86-909d-4ea3-b70f-70642eea076f\") " pod="openshift-marketplace/redhat-operators-j9fcq" Nov 29 08:17:55 crc kubenswrapper[4947]: I1129 08:17:55.188922 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="be430ea7-e7df-4bdf-9750-167af2ac6a09" path="/var/lib/kubelet/pods/be430ea7-e7df-4bdf-9750-167af2ac6a09/volumes" Nov 29 08:17:55 crc kubenswrapper[4947]: I1129 08:17:55.193412 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7xfp\" (UniqueName: \"kubernetes.io/projected/35347c86-909d-4ea3-b70f-70642eea076f-kube-api-access-t7xfp\") pod \"redhat-operators-j9fcq\" (UID: \"35347c86-909d-4ea3-b70f-70642eea076f\") " pod="openshift-marketplace/redhat-operators-j9fcq" Nov 29 08:17:55 crc kubenswrapper[4947]: I1129 08:17:55.298673 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j9fcq" Nov 29 08:17:55 crc kubenswrapper[4947]: I1129 08:17:55.790577 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j9fcq"] Nov 29 08:17:56 crc kubenswrapper[4947]: I1129 08:17:56.573172 4947 generic.go:334] "Generic (PLEG): container finished" podID="35347c86-909d-4ea3-b70f-70642eea076f" containerID="4e7ea393d05e2fbfef94085b4522d15d3bd01c9630395c65e3de07977c8e4b9c" exitCode=0 Nov 29 08:17:56 crc kubenswrapper[4947]: I1129 08:17:56.573273 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9fcq" event={"ID":"35347c86-909d-4ea3-b70f-70642eea076f","Type":"ContainerDied","Data":"4e7ea393d05e2fbfef94085b4522d15d3bd01c9630395c65e3de07977c8e4b9c"} Nov 29 08:17:56 crc kubenswrapper[4947]: I1129 08:17:56.573844 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9fcq" event={"ID":"35347c86-909d-4ea3-b70f-70642eea076f","Type":"ContainerStarted","Data":"c77b1092ee9657c334df520272c30721104f254e65410c33bac2373566cc949b"} Nov 29 08:17:58 crc kubenswrapper[4947]: I1129 08:17:58.179512 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:17:58 crc 
kubenswrapper[4947]: E1129 08:17:58.181200 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:17:58 crc kubenswrapper[4947]: I1129 08:17:58.604987 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9fcq" event={"ID":"35347c86-909d-4ea3-b70f-70642eea076f","Type":"ContainerStarted","Data":"6d86732c55f197eb2e953a404dfca37c2ce062bf6a6dd55eb0e06f04f52bcec6"} Nov 29 08:17:59 crc kubenswrapper[4947]: I1129 08:17:59.618432 4947 generic.go:334] "Generic (PLEG): container finished" podID="35347c86-909d-4ea3-b70f-70642eea076f" containerID="6d86732c55f197eb2e953a404dfca37c2ce062bf6a6dd55eb0e06f04f52bcec6" exitCode=0 Nov 29 08:17:59 crc kubenswrapper[4947]: I1129 08:17:59.618536 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9fcq" event={"ID":"35347c86-909d-4ea3-b70f-70642eea076f","Type":"ContainerDied","Data":"6d86732c55f197eb2e953a404dfca37c2ce062bf6a6dd55eb0e06f04f52bcec6"} Nov 29 08:18:00 crc kubenswrapper[4947]: I1129 08:18:00.644800 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9fcq" event={"ID":"35347c86-909d-4ea3-b70f-70642eea076f","Type":"ContainerStarted","Data":"7f723056594a6d3d2148054f93c19c3fd0dfe8aa96fb966c3386305ceaa69f48"} Nov 29 08:18:00 crc kubenswrapper[4947]: I1129 08:18:00.676184 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j9fcq" podStartSLOduration=3.048327675 podStartE2EDuration="6.676163736s" podCreationTimestamp="2025-11-29 08:17:54 +0000 UTC" 
firstStartedPulling="2025-11-29 08:17:56.575143835 +0000 UTC m=+6227.619525916" lastFinishedPulling="2025-11-29 08:18:00.202979896 +0000 UTC m=+6231.247361977" observedRunningTime="2025-11-29 08:18:00.665078996 +0000 UTC m=+6231.709461077" watchObservedRunningTime="2025-11-29 08:18:00.676163736 +0000 UTC m=+6231.720545817" Nov 29 08:18:05 crc kubenswrapper[4947]: I1129 08:18:05.299315 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j9fcq" Nov 29 08:18:05 crc kubenswrapper[4947]: I1129 08:18:05.300995 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j9fcq" Nov 29 08:18:06 crc kubenswrapper[4947]: I1129 08:18:06.373078 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j9fcq" podUID="35347c86-909d-4ea3-b70f-70642eea076f" containerName="registry-server" probeResult="failure" output=< Nov 29 08:18:06 crc kubenswrapper[4947]: timeout: failed to connect service ":50051" within 1s Nov 29 08:18:06 crc kubenswrapper[4947]: > Nov 29 08:18:09 crc kubenswrapper[4947]: I1129 08:18:09.204241 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:18:09 crc kubenswrapper[4947]: E1129 08:18:09.205181 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:18:09 crc kubenswrapper[4947]: I1129 08:18:09.847790 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-6bf6f7cb48-d9zj4_f0759441-d9a0-4d4d-aead-69e48bcc16c7/barbican-api/0.log" Nov 29 08:18:09 crc kubenswrapper[4947]: I1129 08:18:09.943337 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6bf6f7cb48-d9zj4_f0759441-d9a0-4d4d-aead-69e48bcc16c7/barbican-api-log/0.log" Nov 29 08:18:10 crc kubenswrapper[4947]: I1129 08:18:10.100316 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-59fd76fd88-6ghpn_ae7d6be8-33b3-4771-aebf-d7302883bd3d/barbican-keystone-listener/0.log" Nov 29 08:18:10 crc kubenswrapper[4947]: I1129 08:18:10.351136 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-85c67749d7-4wkg2_ccadffe2-e81e-44ab-a879-fefa01177386/barbican-worker/0.log" Nov 29 08:18:10 crc kubenswrapper[4947]: I1129 08:18:10.366506 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-85c67749d7-4wkg2_ccadffe2-e81e-44ab-a879-fefa01177386/barbican-worker-log/0.log" Nov 29 08:18:10 crc kubenswrapper[4947]: I1129 08:18:10.421278 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-59fd76fd88-6ghpn_ae7d6be8-33b3-4771-aebf-d7302883bd3d/barbican-keystone-listener-log/0.log" Nov 29 08:18:10 crc kubenswrapper[4947]: I1129 08:18:10.636664 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-wb8qd_a1b54b1c-c811-4bd4-b2a4-0d26f1a2b039/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:18:10 crc kubenswrapper[4947]: I1129 08:18:10.665380 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_23c9606d-46f1-4079-a6b1-ecc87e1c99b1/ceilometer-central-agent/0.log" Nov 29 08:18:10 crc kubenswrapper[4947]: I1129 08:18:10.859877 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_23c9606d-46f1-4079-a6b1-ecc87e1c99b1/proxy-httpd/0.log" Nov 29 08:18:10 crc kubenswrapper[4947]: I1129 08:18:10.890731 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_23c9606d-46f1-4079-a6b1-ecc87e1c99b1/sg-core/0.log" Nov 29 08:18:11 crc kubenswrapper[4947]: I1129 08:18:11.746329 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sqrwc_5f41a587-e5e5-4f2a-becb-4870793e41c9/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:18:11 crc kubenswrapper[4947]: I1129 08:18:11.746665 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-kwkfr_5210261f-d57b-4ed1-bf74-7c4f73cb6a8f/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:18:11 crc kubenswrapper[4947]: I1129 08:18:11.796330 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_23c9606d-46f1-4079-a6b1-ecc87e1c99b1/ceilometer-notification-agent/0.log" Nov 29 08:18:12 crc kubenswrapper[4947]: I1129 08:18:12.427760 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_7887622b-8fe9-4ea2-a867-3490f70c1d87/probe/0.log" Nov 29 08:18:12 crc kubenswrapper[4947]: I1129 08:18:12.537739 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_53f96945-8da7-4cac-8579-990373298a91/cinder-api/0.log" Nov 29 08:18:12 crc kubenswrapper[4947]: I1129 08:18:12.849359 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_53f96945-8da7-4cac-8579-990373298a91/cinder-api-log/0.log" Nov 29 08:18:12 crc kubenswrapper[4947]: I1129 08:18:12.859995 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_6addf10a-a22b-445f-af40-5812ab69c7a0/cinder-scheduler/0.log" Nov 29 08:18:13 crc kubenswrapper[4947]: I1129 08:18:13.062722 4947 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_6addf10a-a22b-445f-af40-5812ab69c7a0/probe/0.log" Nov 29 08:18:13 crc kubenswrapper[4947]: I1129 08:18:13.363978 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_06d1947f-99c6-4f58-93f6-b15ea0b89743/probe/0.log" Nov 29 08:18:13 crc kubenswrapper[4947]: I1129 08:18:13.504864 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-n25vz_46ef6c65-ceb7-4787-95bc-783fc372fdf7/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:18:13 crc kubenswrapper[4947]: I1129 08:18:13.633534 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-6p547_decf89ef-31dd-410d-a70c-a19245e90e55/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:18:13 crc kubenswrapper[4947]: I1129 08:18:13.906209 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_7887622b-8fe9-4ea2-a867-3490f70c1d87/cinder-backup/0.log" Nov 29 08:18:14 crc kubenswrapper[4947]: I1129 08:18:14.055679 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-mqmgd_3eb0b137-b0e2-495d-afdf-a81bdc9b10b2/init/0.log" Nov 29 08:18:14 crc kubenswrapper[4947]: I1129 08:18:14.116815 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-mqmgd_3eb0b137-b0e2-495d-afdf-a81bdc9b10b2/init/0.log" Nov 29 08:18:14 crc kubenswrapper[4947]: I1129 08:18:14.299935 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-mqmgd_3eb0b137-b0e2-495d-afdf-a81bdc9b10b2/dnsmasq-dns/0.log" Nov 29 08:18:14 crc kubenswrapper[4947]: I1129 08:18:14.355268 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_5577dab3-c69a-480a-8fed-a9fdaf4152c8/glance-log/0.log" Nov 29 08:18:14 crc kubenswrapper[4947]: I1129 08:18:14.419378 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5577dab3-c69a-480a-8fed-a9fdaf4152c8/glance-httpd/0.log" Nov 29 08:18:14 crc kubenswrapper[4947]: I1129 08:18:14.621606 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7473ae66-4c18-4a4a-92ab-cd0ce58ace1c/glance-log/0.log" Nov 29 08:18:14 crc kubenswrapper[4947]: I1129 08:18:14.661796 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7473ae66-4c18-4a4a-92ab-cd0ce58ace1c/glance-httpd/0.log" Nov 29 08:18:14 crc kubenswrapper[4947]: I1129 08:18:14.984069 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-8v7dd_d33adfe1-c0d7-4896-8acf-00e6c0a4afc7/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:18:15 crc kubenswrapper[4947]: I1129 08:18:15.010075 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6757b657b4-vdhrb_4d917345-655a-4f24-bfd1-57dd9a7e9880/horizon/0.log" Nov 29 08:18:15 crc kubenswrapper[4947]: I1129 08:18:15.310458 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-nkrl9_b4bb992c-9305-4cf1-aa77-789ee88999fd/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:18:15 crc kubenswrapper[4947]: I1129 08:18:15.363138 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j9fcq" Nov 29 08:18:15 crc kubenswrapper[4947]: I1129 08:18:15.420362 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j9fcq" Nov 29 08:18:15 crc kubenswrapper[4947]: I1129 
08:18:15.480535 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6757b657b4-vdhrb_4d917345-655a-4f24-bfd1-57dd9a7e9880/horizon-log/0.log" Nov 29 08:18:15 crc kubenswrapper[4947]: I1129 08:18:15.545603 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29406661-f885s_bb316ee8-4e92-455d-b09c-e6f1b57d9a98/keystone-cron/0.log" Nov 29 08:18:15 crc kubenswrapper[4947]: I1129 08:18:15.613672 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j9fcq"] Nov 29 08:18:15 crc kubenswrapper[4947]: I1129 08:18:15.774384 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29406721-fnfl5_cf41f5ca-60a1-44e7-ac93-c4230e7d8be3/keystone-cron/0.log" Nov 29 08:18:15 crc kubenswrapper[4947]: I1129 08:18:15.975740 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_b54f1c17-a225-4975-9d6f-100e5c5b92bd/kube-state-metrics/0.log" Nov 29 08:18:16 crc kubenswrapper[4947]: I1129 08:18:16.262449 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-4x8l8_97ae78bf-a258-4fbc-912c-f7d6ce706e54/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:18:16 crc kubenswrapper[4947]: I1129 08:18:16.504767 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_1fa713b2-c361-4226-b17d-933dee71ba86/manila-api-log/0.log" Nov 29 08:18:16 crc kubenswrapper[4947]: I1129 08:18:16.518951 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_1fa713b2-c361-4226-b17d-933dee71ba86/manila-api/0.log" Nov 29 08:18:16 crc kubenswrapper[4947]: I1129 08:18:16.568685 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-569f86dc65-rkk52_e1fdddd0-fe06-43ae-b805-15dbd74c0107/keystone-api/0.log" Nov 29 08:18:16 crc kubenswrapper[4947]: I1129 08:18:16.711830 4947 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_4f3eed66-4789-4722-91df-8260e5af75a6/probe/0.log" Nov 29 08:18:16 crc kubenswrapper[4947]: I1129 08:18:16.795269 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j9fcq" podUID="35347c86-909d-4ea3-b70f-70642eea076f" containerName="registry-server" containerID="cri-o://7f723056594a6d3d2148054f93c19c3fd0dfe8aa96fb966c3386305ceaa69f48" gracePeriod=2 Nov 29 08:18:16 crc kubenswrapper[4947]: I1129 08:18:16.850081 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_4f3eed66-4789-4722-91df-8260e5af75a6/manila-scheduler/0.log" Nov 29 08:18:16 crc kubenswrapper[4947]: I1129 08:18:16.949684 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_fcb6f32f-e314-410e-8902-c7812cf9bbdc/probe/0.log" Nov 29 08:18:17 crc kubenswrapper[4947]: I1129 08:18:17.173387 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_fcb6f32f-e314-410e-8902-c7812cf9bbdc/manila-share/0.log" Nov 29 08:18:17 crc kubenswrapper[4947]: I1129 08:18:17.606412 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dbcc54df-b7blt_fc94a354-bf43-4d41-bd15-33a8c766752f/neutron-httpd/0.log" Nov 29 08:18:17 crc kubenswrapper[4947]: I1129 08:18:17.676895 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dbcc54df-b7blt_fc94a354-bf43-4d41-bd15-33a8c766752f/neutron-api/0.log" Nov 29 08:18:17 crc kubenswrapper[4947]: I1129 08:18:17.769479 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-8sg9l_60c10307-3d24-4e37-b1a6-e165784f8f3c/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:18:17 crc kubenswrapper[4947]: I1129 08:18:17.808778 4947 generic.go:334] "Generic (PLEG): container finished" 
podID="35347c86-909d-4ea3-b70f-70642eea076f" containerID="7f723056594a6d3d2148054f93c19c3fd0dfe8aa96fb966c3386305ceaa69f48" exitCode=0 Nov 29 08:18:17 crc kubenswrapper[4947]: I1129 08:18:17.809243 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9fcq" event={"ID":"35347c86-909d-4ea3-b70f-70642eea076f","Type":"ContainerDied","Data":"7f723056594a6d3d2148054f93c19c3fd0dfe8aa96fb966c3386305ceaa69f48"} Nov 29 08:18:17 crc kubenswrapper[4947]: I1129 08:18:17.924391 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j9fcq" Nov 29 08:18:17 crc kubenswrapper[4947]: I1129 08:18:17.947753 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35347c86-909d-4ea3-b70f-70642eea076f-catalog-content\") pod \"35347c86-909d-4ea3-b70f-70642eea076f\" (UID: \"35347c86-909d-4ea3-b70f-70642eea076f\") " Nov 29 08:18:17 crc kubenswrapper[4947]: I1129 08:18:17.947842 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7xfp\" (UniqueName: \"kubernetes.io/projected/35347c86-909d-4ea3-b70f-70642eea076f-kube-api-access-t7xfp\") pod \"35347c86-909d-4ea3-b70f-70642eea076f\" (UID: \"35347c86-909d-4ea3-b70f-70642eea076f\") " Nov 29 08:18:17 crc kubenswrapper[4947]: I1129 08:18:17.947868 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35347c86-909d-4ea3-b70f-70642eea076f-utilities\") pod \"35347c86-909d-4ea3-b70f-70642eea076f\" (UID: \"35347c86-909d-4ea3-b70f-70642eea076f\") " Nov 29 08:18:17 crc kubenswrapper[4947]: I1129 08:18:17.949177 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35347c86-909d-4ea3-b70f-70642eea076f-utilities" (OuterVolumeSpecName: "utilities") pod 
"35347c86-909d-4ea3-b70f-70642eea076f" (UID: "35347c86-909d-4ea3-b70f-70642eea076f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:18:17 crc kubenswrapper[4947]: I1129 08:18:17.981780 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35347c86-909d-4ea3-b70f-70642eea076f-kube-api-access-t7xfp" (OuterVolumeSpecName: "kube-api-access-t7xfp") pod "35347c86-909d-4ea3-b70f-70642eea076f" (UID: "35347c86-909d-4ea3-b70f-70642eea076f"). InnerVolumeSpecName "kube-api-access-t7xfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:18:18 crc kubenswrapper[4947]: I1129 08:18:18.052032 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7xfp\" (UniqueName: \"kubernetes.io/projected/35347c86-909d-4ea3-b70f-70642eea076f-kube-api-access-t7xfp\") on node \"crc\" DevicePath \"\"" Nov 29 08:18:18 crc kubenswrapper[4947]: I1129 08:18:18.052105 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35347c86-909d-4ea3-b70f-70642eea076f-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:18:18 crc kubenswrapper[4947]: I1129 08:18:18.124042 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35347c86-909d-4ea3-b70f-70642eea076f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35347c86-909d-4ea3-b70f-70642eea076f" (UID: "35347c86-909d-4ea3-b70f-70642eea076f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:18:18 crc kubenswrapper[4947]: I1129 08:18:18.154196 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35347c86-909d-4ea3-b70f-70642eea076f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:18:18 crc kubenswrapper[4947]: I1129 08:18:18.727699 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_52e8eb08-68e9-4fac-b7bf-b7481388c22e/nova-cell0-conductor-conductor/0.log" Nov 29 08:18:18 crc kubenswrapper[4947]: I1129 08:18:18.829006 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9fcq" event={"ID":"35347c86-909d-4ea3-b70f-70642eea076f","Type":"ContainerDied","Data":"c77b1092ee9657c334df520272c30721104f254e65410c33bac2373566cc949b"} Nov 29 08:18:18 crc kubenswrapper[4947]: I1129 08:18:18.829520 4947 scope.go:117] "RemoveContainer" containerID="7f723056594a6d3d2148054f93c19c3fd0dfe8aa96fb966c3386305ceaa69f48" Nov 29 08:18:18 crc kubenswrapper[4947]: I1129 08:18:18.829766 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9fcq" Nov 29 08:18:18 crc kubenswrapper[4947]: I1129 08:18:18.849642 4947 scope.go:117] "RemoveContainer" containerID="6d86732c55f197eb2e953a404dfca37c2ce062bf6a6dd55eb0e06f04f52bcec6" Nov 29 08:18:18 crc kubenswrapper[4947]: I1129 08:18:18.873697 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j9fcq"] Nov 29 08:18:18 crc kubenswrapper[4947]: I1129 08:18:18.887973 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j9fcq"] Nov 29 08:18:18 crc kubenswrapper[4947]: I1129 08:18:18.902482 4947 scope.go:117] "RemoveContainer" containerID="4e7ea393d05e2fbfef94085b4522d15d3bd01c9630395c65e3de07977c8e4b9c" Nov 29 08:18:19 crc kubenswrapper[4947]: I1129 08:18:19.101172 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e9b456d6-c5bb-4832-9497-b029168ba297/nova-api-log/0.log" Nov 29 08:18:19 crc kubenswrapper[4947]: I1129 08:18:19.191832 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35347c86-909d-4ea3-b70f-70642eea076f" path="/var/lib/kubelet/pods/35347c86-909d-4ea3-b70f-70642eea076f/volumes" Nov 29 08:18:19 crc kubenswrapper[4947]: I1129 08:18:19.387117 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c41e44dd-b8f5-41e1-b296-abf1ed9bfda1/nova-cell1-conductor-conductor/0.log" Nov 29 08:18:19 crc kubenswrapper[4947]: I1129 08:18:19.698371 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e9b456d6-c5bb-4832-9497-b029168ba297/nova-api-api/0.log" Nov 29 08:18:19 crc kubenswrapper[4947]: I1129 08:18:19.737251 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_50cf9c2c-f6ba-4845-a384-9b689cf99484/nova-cell1-novncproxy-novncproxy/0.log" Nov 29 08:18:19 crc kubenswrapper[4947]: I1129 08:18:19.958651 4947 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nd5bx_f14a521e-01c4-4720-984d-65f1123397ae/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:18:20 crc kubenswrapper[4947]: I1129 08:18:20.106792 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3e8cb9c8-c54d-4269-b59b-6e865d503815/nova-metadata-log/0.log" Nov 29 08:18:20 crc kubenswrapper[4947]: I1129 08:18:20.680776 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_3531a457-8faa-47a6-8db0-4bfc82898e36/nova-scheduler-scheduler/0.log" Nov 29 08:18:20 crc kubenswrapper[4947]: I1129 08:18:20.734292 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f/mysql-bootstrap/0.log" Nov 29 08:18:20 crc kubenswrapper[4947]: I1129 08:18:20.907420 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f/mysql-bootstrap/0.log" Nov 29 08:18:21 crc kubenswrapper[4947]: I1129 08:18:21.000734 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1f1620f7-4f3f-47bf-995a-f4eef1c9cf0f/galera/0.log" Nov 29 08:18:21 crc kubenswrapper[4947]: I1129 08:18:21.218320 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bc968903-97f7-437d-882d-1bb4278dab13/mysql-bootstrap/0.log" Nov 29 08:18:21 crc kubenswrapper[4947]: I1129 08:18:21.439760 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bc968903-97f7-437d-882d-1bb4278dab13/mysql-bootstrap/0.log" Nov 29 08:18:21 crc kubenswrapper[4947]: I1129 08:18:21.583323 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bc968903-97f7-437d-882d-1bb4278dab13/galera/0.log" Nov 29 08:18:21 crc kubenswrapper[4947]: I1129 
08:18:21.802571 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_ee3635c4-674f-4ea9-890c-882857f766ab/openstackclient/0.log" Nov 29 08:18:22 crc kubenswrapper[4947]: I1129 08:18:22.016624 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-r7njh_e02dafe9-25b6-4837-8469-24d843eeff31/openstack-network-exporter/0.log" Nov 29 08:18:22 crc kubenswrapper[4947]: I1129 08:18:22.183088 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:18:22 crc kubenswrapper[4947]: E1129 08:18:22.183649 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:18:22 crc kubenswrapper[4947]: I1129 08:18:22.272366 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ztbsf_5fd9f589-89f8-44d3-9e3e-17546dc61f7b/ovsdb-server-init/0.log" Nov 29 08:18:22 crc kubenswrapper[4947]: I1129 08:18:22.418506 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ztbsf_5fd9f589-89f8-44d3-9e3e-17546dc61f7b/ovsdb-server-init/0.log" Nov 29 08:18:22 crc kubenswrapper[4947]: I1129 08:18:22.461919 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ztbsf_5fd9f589-89f8-44d3-9e3e-17546dc61f7b/ovs-vswitchd/0.log" Nov 29 08:18:22 crc kubenswrapper[4947]: I1129 08:18:22.610247 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ztbsf_5fd9f589-89f8-44d3-9e3e-17546dc61f7b/ovsdb-server/0.log" Nov 29 08:18:22 crc 
kubenswrapper[4947]: I1129 08:18:22.810481 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-sn9qf_fc369426-cee0-4a95-aa63-9b8d4df05e7a/ovn-controller/0.log" Nov 29 08:18:23 crc kubenswrapper[4947]: I1129 08:18:23.082959 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-t5tbd_ac3f3931-63b8-4dea-a2b4-33faca7b3a93/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:18:23 crc kubenswrapper[4947]: I1129 08:18:23.498129 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_869b4713-f63d-4d82-aa68-e1addc0ae4eb/openstack-network-exporter/0.log" Nov 29 08:18:23 crc kubenswrapper[4947]: I1129 08:18:23.599890 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_869b4713-f63d-4d82-aa68-e1addc0ae4eb/ovn-northd/0.log" Nov 29 08:18:23 crc kubenswrapper[4947]: I1129 08:18:23.661176 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3e8cb9c8-c54d-4269-b59b-6e865d503815/nova-metadata-metadata/0.log" Nov 29 08:18:23 crc kubenswrapper[4947]: I1129 08:18:23.850829 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c0e8a035-2cb6-413e-8c9a-86535635ae03/openstack-network-exporter/0.log" Nov 29 08:18:23 crc kubenswrapper[4947]: I1129 08:18:23.904291 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c0e8a035-2cb6-413e-8c9a-86535635ae03/ovsdbserver-nb/0.log" Nov 29 08:18:24 crc kubenswrapper[4947]: I1129 08:18:24.022719 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_06d1947f-99c6-4f58-93f6-b15ea0b89743/cinder-volume/0.log" Nov 29 08:18:24 crc kubenswrapper[4947]: I1129 08:18:24.102643 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_03658f76-d11a-45a6-a60c-f43b6127225b/openstack-network-exporter/0.log" Nov 29 
08:18:24 crc kubenswrapper[4947]: I1129 08:18:24.198578 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_03658f76-d11a-45a6-a60c-f43b6127225b/ovsdbserver-sb/0.log" Nov 29 08:18:24 crc kubenswrapper[4947]: I1129 08:18:24.590078 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-586fc4c58d-gvrpk_08d2e91f-9187-4632-b83f-b966435ebe7f/placement-log/0.log" Nov 29 08:18:24 crc kubenswrapper[4947]: I1129 08:18:24.594257 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-586fc4c58d-gvrpk_08d2e91f-9187-4632-b83f-b966435ebe7f/placement-api/0.log" Nov 29 08:18:24 crc kubenswrapper[4947]: I1129 08:18:24.636663 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d4d9399d-41b3-40c1-89d4-8124e0966300/setup-container/0.log" Nov 29 08:18:24 crc kubenswrapper[4947]: I1129 08:18:24.934116 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_173b8534-1ee1-448a-bdd1-62369c58057b/setup-container/0.log" Nov 29 08:18:24 crc kubenswrapper[4947]: I1129 08:18:24.952722 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d4d9399d-41b3-40c1-89d4-8124e0966300/setup-container/0.log" Nov 29 08:18:24 crc kubenswrapper[4947]: I1129 08:18:24.958566 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d4d9399d-41b3-40c1-89d4-8124e0966300/rabbitmq/0.log" Nov 29 08:18:25 crc kubenswrapper[4947]: I1129 08:18:25.260250 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_173b8534-1ee1-448a-bdd1-62369c58057b/rabbitmq/0.log" Nov 29 08:18:25 crc kubenswrapper[4947]: I1129 08:18:25.326137 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_173b8534-1ee1-448a-bdd1-62369c58057b/setup-container/0.log" Nov 29 08:18:25 crc kubenswrapper[4947]: I1129 
08:18:25.433209 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-7wvgn_7119042f-970c-41db-8f48-6543710b205e/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:18:25 crc kubenswrapper[4947]: I1129 08:18:25.604484 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-4xdqb_a0c726c3-2734-4336-a716-b21a6b32f9f9/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:18:25 crc kubenswrapper[4947]: I1129 08:18:25.720336 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-lhmkw_e443fdc5-130b-4c65-b8f6-54c118beadc6/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:18:25 crc kubenswrapper[4947]: I1129 08:18:25.955747 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-xkggt_e3ebc847-298f-4b57-920d-ed63fb69a427/ssh-known-hosts-edpm-deployment/0.log" Nov 29 08:18:26 crc kubenswrapper[4947]: I1129 08:18:26.162210 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_6adb2028-a62e-456d-8863-55da513e78f2/tempest-tests-tempest-tests-runner/0.log" Nov 29 08:18:26 crc kubenswrapper[4947]: I1129 08:18:26.295755 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_e64f8968-10a4-446e-9d16-2758fca7b95a/test-operator-logs-container/0.log" Nov 29 08:18:26 crc kubenswrapper[4947]: I1129 08:18:26.474187 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-fqwtm_b86cae48-3c9d-4647-96ac-bd6ac89ce895/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:18:26 crc kubenswrapper[4947]: I1129 08:18:26.589004 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_13265680-d7d8-4091-8dab-29ac0243dc05/memcached/0.log" Nov 29 08:18:36 crc kubenswrapper[4947]: I1129 08:18:36.178911 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:18:36 crc kubenswrapper[4947]: E1129 08:18:36.179690 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:18:48 crc kubenswrapper[4947]: I1129 08:18:48.914799 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc_b36e655e-d836-4174-9e94-de2532d08dc4/util/0.log" Nov 29 08:18:49 crc kubenswrapper[4947]: I1129 08:18:49.115975 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc_b36e655e-d836-4174-9e94-de2532d08dc4/util/0.log" Nov 29 08:18:49 crc kubenswrapper[4947]: I1129 08:18:49.186659 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:18:49 crc kubenswrapper[4947]: E1129 08:18:49.188063 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:18:49 crc 
kubenswrapper[4947]: I1129 08:18:49.207118 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc_b36e655e-d836-4174-9e94-de2532d08dc4/pull/0.log" Nov 29 08:18:49 crc kubenswrapper[4947]: I1129 08:18:49.230420 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc_b36e655e-d836-4174-9e94-de2532d08dc4/pull/0.log" Nov 29 08:18:49 crc kubenswrapper[4947]: I1129 08:18:49.460794 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc_b36e655e-d836-4174-9e94-de2532d08dc4/util/0.log" Nov 29 08:18:49 crc kubenswrapper[4947]: I1129 08:18:49.473068 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc_b36e655e-d836-4174-9e94-de2532d08dc4/pull/0.log" Nov 29 08:18:49 crc kubenswrapper[4947]: I1129 08:18:49.494172 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2c09e5b1e9a9a4ca78abc65ab9e9961b7fb58875ca4cc4de57b3675e39j25cc_b36e655e-d836-4174-9e94-de2532d08dc4/extract/0.log" Nov 29 08:18:49 crc kubenswrapper[4947]: I1129 08:18:49.681385 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-cpl5k_e9138b46-80df-4e49-a519-807c3037d727/kube-rbac-proxy/0.log" Nov 29 08:18:49 crc kubenswrapper[4947]: I1129 08:18:49.760644 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-cpl5k_e9138b46-80df-4e49-a519-807c3037d727/manager/0.log" Nov 29 08:18:49 crc kubenswrapper[4947]: I1129 08:18:49.846403 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5898f4cf77-vpq4v_a3977bd5-f0c4-4d95-bc6c-905bb2f03a07/kube-rbac-proxy/0.log" Nov 29 08:18:50 crc kubenswrapper[4947]: I1129 08:18:50.013882 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5898f4cf77-vpq4v_a3977bd5-f0c4-4d95-bc6c-905bb2f03a07/manager/0.log" Nov 29 08:18:50 crc kubenswrapper[4947]: I1129 08:18:50.024981 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-c45vq_278ee247-5381-4806-b4b7-9247f9ff162d/kube-rbac-proxy/0.log" Nov 29 08:18:50 crc kubenswrapper[4947]: I1129 08:18:50.078021 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-c45vq_278ee247-5381-4806-b4b7-9247f9ff162d/manager/0.log" Nov 29 08:18:50 crc kubenswrapper[4947]: I1129 08:18:50.310642 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-g24m9_b97d6c13-451c-43b2-a9cf-a1cb50dc4f71/kube-rbac-proxy/0.log" Nov 29 08:18:50 crc kubenswrapper[4947]: I1129 08:18:50.341960 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-g24m9_b97d6c13-451c-43b2-a9cf-a1cb50dc4f71/manager/0.log" Nov 29 08:18:50 crc kubenswrapper[4947]: I1129 08:18:50.511213 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-rrmmb_3c7abd7d-ad0b-4a8e-9de6-95a7da2c11de/kube-rbac-proxy/0.log" Nov 29 08:18:50 crc kubenswrapper[4947]: I1129 08:18:50.610317 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-rrmmb_3c7abd7d-ad0b-4a8e-9de6-95a7da2c11de/manager/0.log" Nov 29 08:18:50 crc kubenswrapper[4947]: I1129 08:18:50.679906 
4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-wwvll_0502c0dc-a197-41c4-a69a-ee8b633f4cb6/kube-rbac-proxy/0.log" Nov 29 08:18:50 crc kubenswrapper[4947]: I1129 08:18:50.800967 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-wwvll_0502c0dc-a197-41c4-a69a-ee8b633f4cb6/manager/0.log" Nov 29 08:18:50 crc kubenswrapper[4947]: I1129 08:18:50.914356 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-4ffsj_b1873231-ef75-414f-85a0-9536e7e45d24/kube-rbac-proxy/0.log" Nov 29 08:18:51 crc kubenswrapper[4947]: I1129 08:18:51.072520 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-4ffsj_b1873231-ef75-414f-85a0-9536e7e45d24/manager/0.log" Nov 29 08:18:51 crc kubenswrapper[4947]: I1129 08:18:51.192207 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-p6ml6_8f81b728-2bf1-4638-91f7-717ab75349f3/manager/0.log" Nov 29 08:18:51 crc kubenswrapper[4947]: I1129 08:18:51.203671 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-p6ml6_8f81b728-2bf1-4638-91f7-717ab75349f3/kube-rbac-proxy/0.log" Nov 29 08:18:51 crc kubenswrapper[4947]: I1129 08:18:51.359578 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-cfpc5_f5486cef-a49e-43c1-b4b2-798ee149238f/kube-rbac-proxy/0.log" Nov 29 08:18:51 crc kubenswrapper[4947]: I1129 08:18:51.503812 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-cfpc5_f5486cef-a49e-43c1-b4b2-798ee149238f/manager/0.log" Nov 29 08:18:51 
crc kubenswrapper[4947]: I1129 08:18:51.564924 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-27k89_fcd59aa5-5edf-4200-aa2d-298f8b452fff/kube-rbac-proxy/0.log" Nov 29 08:18:51 crc kubenswrapper[4947]: I1129 08:18:51.651208 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-27k89_fcd59aa5-5edf-4200-aa2d-298f8b452fff/manager/0.log" Nov 29 08:18:51 crc kubenswrapper[4947]: I1129 08:18:51.741246 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-mmx2k_401765a6-8ea5-478e-bd29-5c2c717b57d3/kube-rbac-proxy/0.log" Nov 29 08:18:51 crc kubenswrapper[4947]: I1129 08:18:51.821968 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-mmx2k_401765a6-8ea5-478e-bd29-5c2c717b57d3/manager/0.log" Nov 29 08:18:51 crc kubenswrapper[4947]: I1129 08:18:51.956241 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-jkq9j_3c962745-1298-4fbc-a4c7-ae2b75c1ce49/kube-rbac-proxy/0.log" Nov 29 08:18:52 crc kubenswrapper[4947]: I1129 08:18:52.114971 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-jkq9j_3c962745-1298-4fbc-a4c7-ae2b75c1ce49/manager/0.log" Nov 29 08:18:52 crc kubenswrapper[4947]: I1129 08:18:52.133879 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-r7gd9_edb90e2e-84a1-4544-87b4-7e26c9dfd9bc/kube-rbac-proxy/0.log" Nov 29 08:18:52 crc kubenswrapper[4947]: I1129 08:18:52.274926 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-r7gd9_edb90e2e-84a1-4544-87b4-7e26c9dfd9bc/manager/0.log" Nov 29 08:18:52 crc kubenswrapper[4947]: I1129 08:18:52.342852 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-9bjj9_7834eae4-c153-4d24-be4b-cfeb03744cff/kube-rbac-proxy/0.log" Nov 29 08:18:52 crc kubenswrapper[4947]: I1129 08:18:52.399291 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-9bjj9_7834eae4-c153-4d24-be4b-cfeb03744cff/manager/0.log" Nov 29 08:18:52 crc kubenswrapper[4947]: I1129 08:18:52.564922 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2_4af0248a-8357-4dce-95fc-1ed6384dc3f2/kube-rbac-proxy/0.log" Nov 29 08:18:52 crc kubenswrapper[4947]: I1129 08:18:52.566165 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd44g9x2_4af0248a-8357-4dce-95fc-1ed6384dc3f2/manager/0.log" Nov 29 08:18:53 crc kubenswrapper[4947]: I1129 08:18:53.087770 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-bnzxg_0074ce39-b36c-4694-869f-965e7109a7ff/registry-server/0.log" Nov 29 08:18:53 crc kubenswrapper[4947]: I1129 08:18:53.140863 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5c6cf7c4d4-cpbpx_177cf389-1d51-4e5c-89f1-a0d377aae734/operator/0.log" Nov 29 08:18:53 crc kubenswrapper[4947]: I1129 08:18:53.312766 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-lhstd_d5a6afb3-6d78-488e-9f48-9d4b58f998bd/kube-rbac-proxy/0.log" Nov 29 08:18:53 crc kubenswrapper[4947]: I1129 08:18:53.508901 
4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-r4qxh_bb3b3efc-1204-4486-b13d-be927701b46a/kube-rbac-proxy/0.log" Nov 29 08:18:53 crc kubenswrapper[4947]: I1129 08:18:53.556694 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-lhstd_d5a6afb3-6d78-488e-9f48-9d4b58f998bd/manager/0.log" Nov 29 08:18:53 crc kubenswrapper[4947]: I1129 08:18:53.622235 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-r4qxh_bb3b3efc-1204-4486-b13d-be927701b46a/manager/0.log" Nov 29 08:18:53 crc kubenswrapper[4947]: I1129 08:18:53.862633 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-256pd_3b8c773c-790f-4897-bf54-8ea2a8780a9a/operator/0.log" Nov 29 08:18:53 crc kubenswrapper[4947]: I1129 08:18:53.913078 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-d5zp8_3741905b-b90e-4f39-b6f9-8e197ebd3b42/kube-rbac-proxy/0.log" Nov 29 08:18:53 crc kubenswrapper[4947]: I1129 08:18:53.960111 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-d5zp8_3741905b-b90e-4f39-b6f9-8e197ebd3b42/manager/0.log" Nov 29 08:18:54 crc kubenswrapper[4947]: I1129 08:18:54.106630 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6c7b7f98c7-qvfhd_e2802692-9854-4734-b9b0-c62eb59fb041/manager/0.log" Nov 29 08:18:54 crc kubenswrapper[4947]: I1129 08:18:54.179975 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-lc774_bae6a0cd-2803-4ddb-9de7-11cb8b39f0fb/kube-rbac-proxy/0.log" Nov 29 08:18:54 crc 
kubenswrapper[4947]: I1129 08:18:54.260668 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-lc774_bae6a0cd-2803-4ddb-9de7-11cb8b39f0fb/manager/0.log" Nov 29 08:18:54 crc kubenswrapper[4947]: I1129 08:18:54.358595 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-86lcn_49e564a6-0297-489a-8239-d195776466e7/manager/0.log" Nov 29 08:18:54 crc kubenswrapper[4947]: I1129 08:18:54.442426 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-86lcn_49e564a6-0297-489a-8239-d195776466e7/kube-rbac-proxy/0.log" Nov 29 08:18:54 crc kubenswrapper[4947]: I1129 08:18:54.487676 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-hxtg8_3c90fa55-0db3-435f-9927-983db02d2fac/manager/0.log" Nov 29 08:18:54 crc kubenswrapper[4947]: I1129 08:18:54.505408 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-hxtg8_3c90fa55-0db3-435f-9927-983db02d2fac/kube-rbac-proxy/0.log" Nov 29 08:19:00 crc kubenswrapper[4947]: I1129 08:19:00.179010 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:19:00 crc kubenswrapper[4947]: E1129 08:19:00.179795 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:19:13 crc kubenswrapper[4947]: I1129 08:19:13.179471 4947 
scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:19:13 crc kubenswrapper[4947]: E1129 08:19:13.180261 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:19:13 crc kubenswrapper[4947]: I1129 08:19:13.429112 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-66xmw_4cd078d1-1fb6-4997-a55e-f90cfea7bf7a/control-plane-machine-set-operator/0.log" Nov 29 08:19:13 crc kubenswrapper[4947]: I1129 08:19:13.667300 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-flwtp_d0bccdee-ee49-4d76-9826-0e8ece077528/kube-rbac-proxy/0.log" Nov 29 08:19:13 crc kubenswrapper[4947]: I1129 08:19:13.713939 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-flwtp_d0bccdee-ee49-4d76-9826-0e8ece077528/machine-api-operator/0.log" Nov 29 08:19:26 crc kubenswrapper[4947]: I1129 08:19:26.959284 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-7lq6x_6a644d1d-1b54-443f-9b88-8540fca13140/cert-manager-controller/0.log" Nov 29 08:19:27 crc kubenswrapper[4947]: I1129 08:19:27.179743 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:19:27 crc kubenswrapper[4947]: I1129 08:19:27.276754 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-qs54j_dd5aa62c-e1b1-4ca2-b931-4599eacd883c/cert-manager-cainjector/0.log" Nov 29 08:19:27 crc kubenswrapper[4947]: I1129 08:19:27.343080 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-gwf74_1a5dba8a-b8b9-48d7-9bd7-cf9873deaaec/cert-manager-webhook/0.log" Nov 29 08:19:27 crc kubenswrapper[4947]: I1129 08:19:27.450183 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"210f0cc487e230d9e23a7bb38858cc276d322f6705bdfbc58e2540a3540f2048"} Nov 29 08:19:40 crc kubenswrapper[4947]: I1129 08:19:40.966685 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-xsb6d_89a3bce9-0cbd-4794-9af1-2618110280cf/nmstate-console-plugin/0.log" Nov 29 08:19:41 crc kubenswrapper[4947]: I1129 08:19:41.125691 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-cb2bc_ec3e638f-25fc-45ef-b33a-696d06037f00/nmstate-handler/0.log" Nov 29 08:19:41 crc kubenswrapper[4947]: I1129 08:19:41.163281 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-w9rng_06e121e7-c32a-419c-ab90-3ac8cd45fb7c/kube-rbac-proxy/0.log" Nov 29 08:19:41 crc kubenswrapper[4947]: I1129 08:19:41.196974 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-w9rng_06e121e7-c32a-419c-ab90-3ac8cd45fb7c/nmstate-metrics/0.log" Nov 29 08:19:41 crc kubenswrapper[4947]: I1129 08:19:41.335508 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-wrff6_e1d1db66-4ff5-4958-b66a-788459bdfe64/nmstate-operator/0.log" Nov 29 08:19:41 crc kubenswrapper[4947]: I1129 08:19:41.371998 4947 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-gkxps_b84ff824-fb24-473d-a9df-501bc25d8547/nmstate-webhook/0.log" Nov 29 08:19:55 crc kubenswrapper[4947]: I1129 08:19:55.925765 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-dmwxs_8215fd44-dc8d-4791-992b-42a573cdfbed/kube-rbac-proxy/0.log" Nov 29 08:19:56 crc kubenswrapper[4947]: I1129 08:19:56.094829 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-dmwxs_8215fd44-dc8d-4791-992b-42a573cdfbed/controller/0.log" Nov 29 08:19:56 crc kubenswrapper[4947]: I1129 08:19:56.157917 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j6qpj_b277e4bf-960a-45ad-a864-c6bfb22ade67/cp-frr-files/0.log" Nov 29 08:19:56 crc kubenswrapper[4947]: I1129 08:19:56.382718 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j6qpj_b277e4bf-960a-45ad-a864-c6bfb22ade67/cp-frr-files/0.log" Nov 29 08:19:56 crc kubenswrapper[4947]: I1129 08:19:56.384515 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j6qpj_b277e4bf-960a-45ad-a864-c6bfb22ade67/cp-metrics/0.log" Nov 29 08:19:56 crc kubenswrapper[4947]: I1129 08:19:56.393535 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j6qpj_b277e4bf-960a-45ad-a864-c6bfb22ade67/cp-reloader/0.log" Nov 29 08:19:56 crc kubenswrapper[4947]: I1129 08:19:56.405864 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j6qpj_b277e4bf-960a-45ad-a864-c6bfb22ade67/cp-reloader/0.log" Nov 29 08:19:56 crc kubenswrapper[4947]: I1129 08:19:56.594309 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j6qpj_b277e4bf-960a-45ad-a864-c6bfb22ade67/cp-metrics/0.log" Nov 29 08:19:56 crc kubenswrapper[4947]: I1129 08:19:56.604760 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-j6qpj_b277e4bf-960a-45ad-a864-c6bfb22ade67/cp-reloader/0.log" Nov 29 08:19:56 crc kubenswrapper[4947]: I1129 08:19:56.613837 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j6qpj_b277e4bf-960a-45ad-a864-c6bfb22ade67/cp-frr-files/0.log" Nov 29 08:19:56 crc kubenswrapper[4947]: I1129 08:19:56.637294 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j6qpj_b277e4bf-960a-45ad-a864-c6bfb22ade67/cp-metrics/0.log" Nov 29 08:19:56 crc kubenswrapper[4947]: I1129 08:19:56.797936 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j6qpj_b277e4bf-960a-45ad-a864-c6bfb22ade67/cp-frr-files/0.log" Nov 29 08:19:56 crc kubenswrapper[4947]: I1129 08:19:56.812770 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j6qpj_b277e4bf-960a-45ad-a864-c6bfb22ade67/cp-metrics/0.log" Nov 29 08:19:56 crc kubenswrapper[4947]: I1129 08:19:56.831614 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j6qpj_b277e4bf-960a-45ad-a864-c6bfb22ade67/cp-reloader/0.log" Nov 29 08:19:56 crc kubenswrapper[4947]: I1129 08:19:56.835593 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j6qpj_b277e4bf-960a-45ad-a864-c6bfb22ade67/controller/0.log" Nov 29 08:19:57 crc kubenswrapper[4947]: I1129 08:19:57.086440 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j6qpj_b277e4bf-960a-45ad-a864-c6bfb22ade67/frr-metrics/0.log" Nov 29 08:19:57 crc kubenswrapper[4947]: I1129 08:19:57.087179 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j6qpj_b277e4bf-960a-45ad-a864-c6bfb22ade67/kube-rbac-proxy/0.log" Nov 29 08:19:57 crc kubenswrapper[4947]: I1129 08:19:57.155989 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-j6qpj_b277e4bf-960a-45ad-a864-c6bfb22ade67/kube-rbac-proxy-frr/0.log" Nov 29 08:19:57 crc kubenswrapper[4947]: I1129 08:19:57.326750 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j6qpj_b277e4bf-960a-45ad-a864-c6bfb22ade67/reloader/0.log" Nov 29 08:19:57 crc kubenswrapper[4947]: I1129 08:19:57.419343 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-kkkmt_207ad8ab-e9b8-437c-a649-24d58d587eb9/frr-k8s-webhook-server/0.log" Nov 29 08:19:57 crc kubenswrapper[4947]: I1129 08:19:57.628321 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-65c8b8f597-t4rnb_41375968-ea4e-4c61-abf1-30e04742292b/manager/0.log" Nov 29 08:19:57 crc kubenswrapper[4947]: I1129 08:19:57.805453 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-77c88b6b8-m2k6x_6d948a37-6c6c-4abc-83ea-b9c87e8dd6e9/webhook-server/0.log" Nov 29 08:19:57 crc kubenswrapper[4947]: I1129 08:19:57.893711 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6lb2j_e020145b-4005-44b0-89a6-293ea42f44f6/kube-rbac-proxy/0.log" Nov 29 08:19:58 crc kubenswrapper[4947]: I1129 08:19:58.538739 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6lb2j_e020145b-4005-44b0-89a6-293ea42f44f6/speaker/0.log" Nov 29 08:19:58 crc kubenswrapper[4947]: I1129 08:19:58.915353 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j6qpj_b277e4bf-960a-45ad-a864-c6bfb22ade67/frr/0.log" Nov 29 08:20:11 crc kubenswrapper[4947]: I1129 08:20:11.331063 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr_8cbbb392-26e7-49a5-bd3f-992f9e5158cb/util/0.log" Nov 29 08:20:11 crc kubenswrapper[4947]: 
I1129 08:20:11.548489 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr_8cbbb392-26e7-49a5-bd3f-992f9e5158cb/util/0.log" Nov 29 08:20:11 crc kubenswrapper[4947]: I1129 08:20:11.554890 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr_8cbbb392-26e7-49a5-bd3f-992f9e5158cb/pull/0.log" Nov 29 08:20:11 crc kubenswrapper[4947]: I1129 08:20:11.578568 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr_8cbbb392-26e7-49a5-bd3f-992f9e5158cb/pull/0.log" Nov 29 08:20:11 crc kubenswrapper[4947]: I1129 08:20:11.748043 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr_8cbbb392-26e7-49a5-bd3f-992f9e5158cb/pull/0.log" Nov 29 08:20:11 crc kubenswrapper[4947]: I1129 08:20:11.770424 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr_8cbbb392-26e7-49a5-bd3f-992f9e5158cb/util/0.log" Nov 29 08:20:11 crc kubenswrapper[4947]: I1129 08:20:11.787552 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvrkrr_8cbbb392-26e7-49a5-bd3f-992f9e5158cb/extract/0.log" Nov 29 08:20:11 crc kubenswrapper[4947]: I1129 08:20:11.903564 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm_221f5984-f762-4e06-8026-b933d54eb4d6/util/0.log" Nov 29 08:20:12 crc kubenswrapper[4947]: I1129 08:20:12.123532 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm_221f5984-f762-4e06-8026-b933d54eb4d6/util/0.log" Nov 29 08:20:12 crc kubenswrapper[4947]: I1129 08:20:12.143175 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm_221f5984-f762-4e06-8026-b933d54eb4d6/pull/0.log" Nov 29 08:20:12 crc kubenswrapper[4947]: I1129 08:20:12.177406 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm_221f5984-f762-4e06-8026-b933d54eb4d6/pull/0.log" Nov 29 08:20:12 crc kubenswrapper[4947]: I1129 08:20:12.333872 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm_221f5984-f762-4e06-8026-b933d54eb4d6/extract/0.log" Nov 29 08:20:12 crc kubenswrapper[4947]: I1129 08:20:12.349782 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm_221f5984-f762-4e06-8026-b933d54eb4d6/pull/0.log" Nov 29 08:20:12 crc kubenswrapper[4947]: I1129 08:20:12.430363 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wksrm_221f5984-f762-4e06-8026-b933d54eb4d6/util/0.log" Nov 29 08:20:12 crc kubenswrapper[4947]: I1129 08:20:12.523422 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k262s_afb2b7b2-3432-44f3-adb2-f347d656aac2/extract-utilities/0.log" Nov 29 08:20:12 crc kubenswrapper[4947]: I1129 08:20:12.748914 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k262s_afb2b7b2-3432-44f3-adb2-f347d656aac2/extract-content/0.log" Nov 29 08:20:12 crc kubenswrapper[4947]: I1129 
08:20:12.754466 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k262s_afb2b7b2-3432-44f3-adb2-f347d656aac2/extract-content/0.log" Nov 29 08:20:12 crc kubenswrapper[4947]: I1129 08:20:12.775138 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k262s_afb2b7b2-3432-44f3-adb2-f347d656aac2/extract-utilities/0.log" Nov 29 08:20:12 crc kubenswrapper[4947]: I1129 08:20:12.925487 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k262s_afb2b7b2-3432-44f3-adb2-f347d656aac2/extract-utilities/0.log" Nov 29 08:20:12 crc kubenswrapper[4947]: I1129 08:20:12.939844 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k262s_afb2b7b2-3432-44f3-adb2-f347d656aac2/extract-content/0.log" Nov 29 08:20:13 crc kubenswrapper[4947]: I1129 08:20:13.240845 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dq9tk_23aac1e3-893c-4b94-bc08-a719c7ad8566/extract-utilities/0.log" Nov 29 08:20:13 crc kubenswrapper[4947]: I1129 08:20:13.456846 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dq9tk_23aac1e3-893c-4b94-bc08-a719c7ad8566/extract-content/0.log" Nov 29 08:20:13 crc kubenswrapper[4947]: I1129 08:20:13.484480 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dq9tk_23aac1e3-893c-4b94-bc08-a719c7ad8566/extract-utilities/0.log" Nov 29 08:20:13 crc kubenswrapper[4947]: I1129 08:20:13.502086 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dq9tk_23aac1e3-893c-4b94-bc08-a719c7ad8566/extract-content/0.log" Nov 29 08:20:13 crc kubenswrapper[4947]: I1129 08:20:13.712000 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-dq9tk_23aac1e3-893c-4b94-bc08-a719c7ad8566/extract-utilities/0.log" Nov 29 08:20:13 crc kubenswrapper[4947]: I1129 08:20:13.714337 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k262s_afb2b7b2-3432-44f3-adb2-f347d656aac2/registry-server/0.log" Nov 29 08:20:13 crc kubenswrapper[4947]: I1129 08:20:13.728039 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dq9tk_23aac1e3-893c-4b94-bc08-a719c7ad8566/extract-content/0.log" Nov 29 08:20:13 crc kubenswrapper[4947]: I1129 08:20:13.921573 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dq9tk_23aac1e3-893c-4b94-bc08-a719c7ad8566/registry-server/0.log" Nov 29 08:20:13 crc kubenswrapper[4947]: I1129 08:20:13.966122 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-gfzmz_f48b107b-5cce-4f64-b7d9-20d9efaa76c6/marketplace-operator/0.log" Nov 29 08:20:14 crc kubenswrapper[4947]: I1129 08:20:14.104779 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8jm8l_32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e/extract-utilities/0.log" Nov 29 08:20:14 crc kubenswrapper[4947]: I1129 08:20:14.279806 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8jm8l_32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e/extract-utilities/0.log" Nov 29 08:20:14 crc kubenswrapper[4947]: I1129 08:20:14.309715 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8jm8l_32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e/extract-content/0.log" Nov 29 08:20:14 crc kubenswrapper[4947]: I1129 08:20:14.359968 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-8jm8l_32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e/extract-content/0.log" Nov 29 08:20:14 crc kubenswrapper[4947]: I1129 08:20:14.558023 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8jm8l_32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e/extract-utilities/0.log" Nov 29 08:20:14 crc kubenswrapper[4947]: I1129 08:20:14.599529 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8jm8l_32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e/extract-content/0.log" Nov 29 08:20:14 crc kubenswrapper[4947]: I1129 08:20:14.771521 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8jm8l_32bcb09c-3f5f-40c7-ab5c-a7ae0c467f7e/registry-server/0.log" Nov 29 08:20:14 crc kubenswrapper[4947]: I1129 08:20:14.801843 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xvx2h_fb500be3-14fe-4b36-9690-6e603eee0771/extract-utilities/0.log" Nov 29 08:20:14 crc kubenswrapper[4947]: I1129 08:20:14.976367 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xvx2h_fb500be3-14fe-4b36-9690-6e603eee0771/extract-utilities/0.log" Nov 29 08:20:14 crc kubenswrapper[4947]: I1129 08:20:14.996453 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xvx2h_fb500be3-14fe-4b36-9690-6e603eee0771/extract-content/0.log" Nov 29 08:20:15 crc kubenswrapper[4947]: I1129 08:20:15.011196 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xvx2h_fb500be3-14fe-4b36-9690-6e603eee0771/extract-content/0.log" Nov 29 08:20:15 crc kubenswrapper[4947]: I1129 08:20:15.158729 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xvx2h_fb500be3-14fe-4b36-9690-6e603eee0771/extract-utilities/0.log" 
Nov 29 08:20:15 crc kubenswrapper[4947]: I1129 08:20:15.197459 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xvx2h_fb500be3-14fe-4b36-9690-6e603eee0771/extract-content/0.log" Nov 29 08:20:16 crc kubenswrapper[4947]: I1129 08:20:16.050659 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xvx2h_fb500be3-14fe-4b36-9690-6e603eee0771/registry-server/0.log" Nov 29 08:20:17 crc kubenswrapper[4947]: I1129 08:20:17.731260 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nbpvf"] Nov 29 08:20:17 crc kubenswrapper[4947]: E1129 08:20:17.732143 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35347c86-909d-4ea3-b70f-70642eea076f" containerName="extract-content" Nov 29 08:20:17 crc kubenswrapper[4947]: I1129 08:20:17.732165 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="35347c86-909d-4ea3-b70f-70642eea076f" containerName="extract-content" Nov 29 08:20:17 crc kubenswrapper[4947]: E1129 08:20:17.732190 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35347c86-909d-4ea3-b70f-70642eea076f" containerName="registry-server" Nov 29 08:20:17 crc kubenswrapper[4947]: I1129 08:20:17.732198 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="35347c86-909d-4ea3-b70f-70642eea076f" containerName="registry-server" Nov 29 08:20:17 crc kubenswrapper[4947]: E1129 08:20:17.735127 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35347c86-909d-4ea3-b70f-70642eea076f" containerName="extract-utilities" Nov 29 08:20:17 crc kubenswrapper[4947]: I1129 08:20:17.735162 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="35347c86-909d-4ea3-b70f-70642eea076f" containerName="extract-utilities" Nov 29 08:20:17 crc kubenswrapper[4947]: I1129 08:20:17.735514 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="35347c86-909d-4ea3-b70f-70642eea076f" 
containerName="registry-server" Nov 29 08:20:17 crc kubenswrapper[4947]: I1129 08:20:17.737411 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbpvf" Nov 29 08:20:17 crc kubenswrapper[4947]: I1129 08:20:17.746295 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbpvf"] Nov 29 08:20:17 crc kubenswrapper[4947]: I1129 08:20:17.860713 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/806e5668-f641-4d21-8835-3da033372cd0-catalog-content\") pod \"redhat-marketplace-nbpvf\" (UID: \"806e5668-f641-4d21-8835-3da033372cd0\") " pod="openshift-marketplace/redhat-marketplace-nbpvf" Nov 29 08:20:17 crc kubenswrapper[4947]: I1129 08:20:17.860782 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/806e5668-f641-4d21-8835-3da033372cd0-utilities\") pod \"redhat-marketplace-nbpvf\" (UID: \"806e5668-f641-4d21-8835-3da033372cd0\") " pod="openshift-marketplace/redhat-marketplace-nbpvf" Nov 29 08:20:17 crc kubenswrapper[4947]: I1129 08:20:17.860920 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z9lz\" (UniqueName: \"kubernetes.io/projected/806e5668-f641-4d21-8835-3da033372cd0-kube-api-access-2z9lz\") pod \"redhat-marketplace-nbpvf\" (UID: \"806e5668-f641-4d21-8835-3da033372cd0\") " pod="openshift-marketplace/redhat-marketplace-nbpvf" Nov 29 08:20:17 crc kubenswrapper[4947]: I1129 08:20:17.963065 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/806e5668-f641-4d21-8835-3da033372cd0-catalog-content\") pod \"redhat-marketplace-nbpvf\" (UID: \"806e5668-f641-4d21-8835-3da033372cd0\") " 
pod="openshift-marketplace/redhat-marketplace-nbpvf" Nov 29 08:20:17 crc kubenswrapper[4947]: I1129 08:20:17.963129 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/806e5668-f641-4d21-8835-3da033372cd0-utilities\") pod \"redhat-marketplace-nbpvf\" (UID: \"806e5668-f641-4d21-8835-3da033372cd0\") " pod="openshift-marketplace/redhat-marketplace-nbpvf" Nov 29 08:20:17 crc kubenswrapper[4947]: I1129 08:20:17.963281 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z9lz\" (UniqueName: \"kubernetes.io/projected/806e5668-f641-4d21-8835-3da033372cd0-kube-api-access-2z9lz\") pod \"redhat-marketplace-nbpvf\" (UID: \"806e5668-f641-4d21-8835-3da033372cd0\") " pod="openshift-marketplace/redhat-marketplace-nbpvf" Nov 29 08:20:17 crc kubenswrapper[4947]: I1129 08:20:17.963983 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/806e5668-f641-4d21-8835-3da033372cd0-catalog-content\") pod \"redhat-marketplace-nbpvf\" (UID: \"806e5668-f641-4d21-8835-3da033372cd0\") " pod="openshift-marketplace/redhat-marketplace-nbpvf" Nov 29 08:20:17 crc kubenswrapper[4947]: I1129 08:20:17.964323 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/806e5668-f641-4d21-8835-3da033372cd0-utilities\") pod \"redhat-marketplace-nbpvf\" (UID: \"806e5668-f641-4d21-8835-3da033372cd0\") " pod="openshift-marketplace/redhat-marketplace-nbpvf" Nov 29 08:20:17 crc kubenswrapper[4947]: I1129 08:20:17.986160 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z9lz\" (UniqueName: \"kubernetes.io/projected/806e5668-f641-4d21-8835-3da033372cd0-kube-api-access-2z9lz\") pod \"redhat-marketplace-nbpvf\" (UID: \"806e5668-f641-4d21-8835-3da033372cd0\") " 
pod="openshift-marketplace/redhat-marketplace-nbpvf" Nov 29 08:20:18 crc kubenswrapper[4947]: I1129 08:20:18.068819 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbpvf" Nov 29 08:20:18 crc kubenswrapper[4947]: I1129 08:20:18.557383 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbpvf"] Nov 29 08:20:18 crc kubenswrapper[4947]: I1129 08:20:18.950804 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbpvf" event={"ID":"806e5668-f641-4d21-8835-3da033372cd0","Type":"ContainerStarted","Data":"faebec593dfbfb1b400559f060fc632149b59cc73a691d83e318aeb3076671b0"} Nov 29 08:20:19 crc kubenswrapper[4947]: I1129 08:20:19.960523 4947 generic.go:334] "Generic (PLEG): container finished" podID="806e5668-f641-4d21-8835-3da033372cd0" containerID="2332fa84c2bc4075e223ca30fff43f85bcde2870b31f0eac6e2e24954498226b" exitCode=0 Nov 29 08:20:19 crc kubenswrapper[4947]: I1129 08:20:19.960617 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbpvf" event={"ID":"806e5668-f641-4d21-8835-3da033372cd0","Type":"ContainerDied","Data":"2332fa84c2bc4075e223ca30fff43f85bcde2870b31f0eac6e2e24954498226b"} Nov 29 08:20:20 crc kubenswrapper[4947]: I1129 08:20:20.974097 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbpvf" event={"ID":"806e5668-f641-4d21-8835-3da033372cd0","Type":"ContainerStarted","Data":"945a2ec16681e41632e7fb0efed6710e60e0a6170fb3af29294a8c821f14a28f"} Nov 29 08:20:21 crc kubenswrapper[4947]: I1129 08:20:21.986672 4947 generic.go:334] "Generic (PLEG): container finished" podID="806e5668-f641-4d21-8835-3da033372cd0" containerID="945a2ec16681e41632e7fb0efed6710e60e0a6170fb3af29294a8c821f14a28f" exitCode=0 Nov 29 08:20:21 crc kubenswrapper[4947]: I1129 08:20:21.986734 4947 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-nbpvf" event={"ID":"806e5668-f641-4d21-8835-3da033372cd0","Type":"ContainerDied","Data":"945a2ec16681e41632e7fb0efed6710e60e0a6170fb3af29294a8c821f14a28f"} Nov 29 08:20:23 crc kubenswrapper[4947]: I1129 08:20:23.002506 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbpvf" event={"ID":"806e5668-f641-4d21-8835-3da033372cd0","Type":"ContainerStarted","Data":"3053d1f7d97ac91ad085385b16232aacb2a23e23bcfb4a357f5f868995f1fd51"} Nov 29 08:20:23 crc kubenswrapper[4947]: I1129 08:20:23.027692 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nbpvf" podStartSLOduration=3.468740346 podStartE2EDuration="6.027667145s" podCreationTimestamp="2025-11-29 08:20:17 +0000 UTC" firstStartedPulling="2025-11-29 08:20:19.963040736 +0000 UTC m=+6371.007422817" lastFinishedPulling="2025-11-29 08:20:22.521967535 +0000 UTC m=+6373.566349616" observedRunningTime="2025-11-29 08:20:23.025503451 +0000 UTC m=+6374.069885532" watchObservedRunningTime="2025-11-29 08:20:23.027667145 +0000 UTC m=+6374.072049226" Nov 29 08:20:28 crc kubenswrapper[4947]: I1129 08:20:28.069637 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nbpvf" Nov 29 08:20:28 crc kubenswrapper[4947]: I1129 08:20:28.070267 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nbpvf" Nov 29 08:20:28 crc kubenswrapper[4947]: I1129 08:20:28.136633 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nbpvf" Nov 29 08:20:29 crc kubenswrapper[4947]: I1129 08:20:29.123129 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nbpvf" Nov 29 08:20:29 crc kubenswrapper[4947]: I1129 08:20:29.176129 4947 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbpvf"] Nov 29 08:20:31 crc kubenswrapper[4947]: I1129 08:20:31.073164 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nbpvf" podUID="806e5668-f641-4d21-8835-3da033372cd0" containerName="registry-server" containerID="cri-o://3053d1f7d97ac91ad085385b16232aacb2a23e23bcfb4a357f5f868995f1fd51" gracePeriod=2 Nov 29 08:20:31 crc kubenswrapper[4947]: I1129 08:20:31.515430 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbpvf" Nov 29 08:20:31 crc kubenswrapper[4947]: I1129 08:20:31.691288 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/806e5668-f641-4d21-8835-3da033372cd0-catalog-content\") pod \"806e5668-f641-4d21-8835-3da033372cd0\" (UID: \"806e5668-f641-4d21-8835-3da033372cd0\") " Nov 29 08:20:31 crc kubenswrapper[4947]: I1129 08:20:31.691396 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z9lz\" (UniqueName: \"kubernetes.io/projected/806e5668-f641-4d21-8835-3da033372cd0-kube-api-access-2z9lz\") pod \"806e5668-f641-4d21-8835-3da033372cd0\" (UID: \"806e5668-f641-4d21-8835-3da033372cd0\") " Nov 29 08:20:31 crc kubenswrapper[4947]: I1129 08:20:31.691559 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/806e5668-f641-4d21-8835-3da033372cd0-utilities\") pod \"806e5668-f641-4d21-8835-3da033372cd0\" (UID: \"806e5668-f641-4d21-8835-3da033372cd0\") " Nov 29 08:20:31 crc kubenswrapper[4947]: I1129 08:20:31.692950 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/806e5668-f641-4d21-8835-3da033372cd0-utilities" (OuterVolumeSpecName: "utilities") pod 
"806e5668-f641-4d21-8835-3da033372cd0" (UID: "806e5668-f641-4d21-8835-3da033372cd0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:20:31 crc kubenswrapper[4947]: I1129 08:20:31.704371 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/806e5668-f641-4d21-8835-3da033372cd0-kube-api-access-2z9lz" (OuterVolumeSpecName: "kube-api-access-2z9lz") pod "806e5668-f641-4d21-8835-3da033372cd0" (UID: "806e5668-f641-4d21-8835-3da033372cd0"). InnerVolumeSpecName "kube-api-access-2z9lz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:20:31 crc kubenswrapper[4947]: I1129 08:20:31.713201 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/806e5668-f641-4d21-8835-3da033372cd0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "806e5668-f641-4d21-8835-3da033372cd0" (UID: "806e5668-f641-4d21-8835-3da033372cd0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:20:31 crc kubenswrapper[4947]: I1129 08:20:31.794085 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z9lz\" (UniqueName: \"kubernetes.io/projected/806e5668-f641-4d21-8835-3da033372cd0-kube-api-access-2z9lz\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:31 crc kubenswrapper[4947]: I1129 08:20:31.794481 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/806e5668-f641-4d21-8835-3da033372cd0-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:31 crc kubenswrapper[4947]: I1129 08:20:31.794495 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/806e5668-f641-4d21-8835-3da033372cd0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:32 crc kubenswrapper[4947]: I1129 08:20:32.083757 4947 generic.go:334] "Generic (PLEG): container finished" podID="806e5668-f641-4d21-8835-3da033372cd0" containerID="3053d1f7d97ac91ad085385b16232aacb2a23e23bcfb4a357f5f868995f1fd51" exitCode=0 Nov 29 08:20:32 crc kubenswrapper[4947]: I1129 08:20:32.083802 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbpvf" event={"ID":"806e5668-f641-4d21-8835-3da033372cd0","Type":"ContainerDied","Data":"3053d1f7d97ac91ad085385b16232aacb2a23e23bcfb4a357f5f868995f1fd51"} Nov 29 08:20:32 crc kubenswrapper[4947]: I1129 08:20:32.083817 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbpvf" Nov 29 08:20:32 crc kubenswrapper[4947]: I1129 08:20:32.083828 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbpvf" event={"ID":"806e5668-f641-4d21-8835-3da033372cd0","Type":"ContainerDied","Data":"faebec593dfbfb1b400559f060fc632149b59cc73a691d83e318aeb3076671b0"} Nov 29 08:20:32 crc kubenswrapper[4947]: I1129 08:20:32.083857 4947 scope.go:117] "RemoveContainer" containerID="3053d1f7d97ac91ad085385b16232aacb2a23e23bcfb4a357f5f868995f1fd51" Nov 29 08:20:32 crc kubenswrapper[4947]: I1129 08:20:32.110937 4947 scope.go:117] "RemoveContainer" containerID="945a2ec16681e41632e7fb0efed6710e60e0a6170fb3af29294a8c821f14a28f" Nov 29 08:20:32 crc kubenswrapper[4947]: I1129 08:20:32.125635 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbpvf"] Nov 29 08:20:32 crc kubenswrapper[4947]: I1129 08:20:32.135864 4947 scope.go:117] "RemoveContainer" containerID="2332fa84c2bc4075e223ca30fff43f85bcde2870b31f0eac6e2e24954498226b" Nov 29 08:20:32 crc kubenswrapper[4947]: I1129 08:20:32.137011 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbpvf"] Nov 29 08:20:32 crc kubenswrapper[4947]: I1129 08:20:32.197414 4947 scope.go:117] "RemoveContainer" containerID="3053d1f7d97ac91ad085385b16232aacb2a23e23bcfb4a357f5f868995f1fd51" Nov 29 08:20:32 crc kubenswrapper[4947]: E1129 08:20:32.197943 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3053d1f7d97ac91ad085385b16232aacb2a23e23bcfb4a357f5f868995f1fd51\": container with ID starting with 3053d1f7d97ac91ad085385b16232aacb2a23e23bcfb4a357f5f868995f1fd51 not found: ID does not exist" containerID="3053d1f7d97ac91ad085385b16232aacb2a23e23bcfb4a357f5f868995f1fd51" Nov 29 08:20:32 crc kubenswrapper[4947]: I1129 08:20:32.197985 4947 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3053d1f7d97ac91ad085385b16232aacb2a23e23bcfb4a357f5f868995f1fd51"} err="failed to get container status \"3053d1f7d97ac91ad085385b16232aacb2a23e23bcfb4a357f5f868995f1fd51\": rpc error: code = NotFound desc = could not find container \"3053d1f7d97ac91ad085385b16232aacb2a23e23bcfb4a357f5f868995f1fd51\": container with ID starting with 3053d1f7d97ac91ad085385b16232aacb2a23e23bcfb4a357f5f868995f1fd51 not found: ID does not exist" Nov 29 08:20:32 crc kubenswrapper[4947]: I1129 08:20:32.198011 4947 scope.go:117] "RemoveContainer" containerID="945a2ec16681e41632e7fb0efed6710e60e0a6170fb3af29294a8c821f14a28f" Nov 29 08:20:32 crc kubenswrapper[4947]: E1129 08:20:32.198332 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"945a2ec16681e41632e7fb0efed6710e60e0a6170fb3af29294a8c821f14a28f\": container with ID starting with 945a2ec16681e41632e7fb0efed6710e60e0a6170fb3af29294a8c821f14a28f not found: ID does not exist" containerID="945a2ec16681e41632e7fb0efed6710e60e0a6170fb3af29294a8c821f14a28f" Nov 29 08:20:32 crc kubenswrapper[4947]: I1129 08:20:32.198390 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"945a2ec16681e41632e7fb0efed6710e60e0a6170fb3af29294a8c821f14a28f"} err="failed to get container status \"945a2ec16681e41632e7fb0efed6710e60e0a6170fb3af29294a8c821f14a28f\": rpc error: code = NotFound desc = could not find container \"945a2ec16681e41632e7fb0efed6710e60e0a6170fb3af29294a8c821f14a28f\": container with ID starting with 945a2ec16681e41632e7fb0efed6710e60e0a6170fb3af29294a8c821f14a28f not found: ID does not exist" Nov 29 08:20:32 crc kubenswrapper[4947]: I1129 08:20:32.198423 4947 scope.go:117] "RemoveContainer" containerID="2332fa84c2bc4075e223ca30fff43f85bcde2870b31f0eac6e2e24954498226b" Nov 29 08:20:32 crc kubenswrapper[4947]: E1129 
08:20:32.198761 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2332fa84c2bc4075e223ca30fff43f85bcde2870b31f0eac6e2e24954498226b\": container with ID starting with 2332fa84c2bc4075e223ca30fff43f85bcde2870b31f0eac6e2e24954498226b not found: ID does not exist" containerID="2332fa84c2bc4075e223ca30fff43f85bcde2870b31f0eac6e2e24954498226b" Nov 29 08:20:32 crc kubenswrapper[4947]: I1129 08:20:32.198806 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2332fa84c2bc4075e223ca30fff43f85bcde2870b31f0eac6e2e24954498226b"} err="failed to get container status \"2332fa84c2bc4075e223ca30fff43f85bcde2870b31f0eac6e2e24954498226b\": rpc error: code = NotFound desc = could not find container \"2332fa84c2bc4075e223ca30fff43f85bcde2870b31f0eac6e2e24954498226b\": container with ID starting with 2332fa84c2bc4075e223ca30fff43f85bcde2870b31f0eac6e2e24954498226b not found: ID does not exist" Nov 29 08:20:33 crc kubenswrapper[4947]: I1129 08:20:33.200514 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="806e5668-f641-4d21-8835-3da033372cd0" path="/var/lib/kubelet/pods/806e5668-f641-4d21-8835-3da033372cd0/volumes" Nov 29 08:20:33 crc kubenswrapper[4947]: I1129 08:20:33.579846 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-84fdj"] Nov 29 08:20:33 crc kubenswrapper[4947]: E1129 08:20:33.580402 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="806e5668-f641-4d21-8835-3da033372cd0" containerName="extract-content" Nov 29 08:20:33 crc kubenswrapper[4947]: I1129 08:20:33.580425 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="806e5668-f641-4d21-8835-3da033372cd0" containerName="extract-content" Nov 29 08:20:33 crc kubenswrapper[4947]: E1129 08:20:33.580439 4947 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="806e5668-f641-4d21-8835-3da033372cd0" containerName="registry-server" Nov 29 08:20:33 crc kubenswrapper[4947]: I1129 08:20:33.580446 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="806e5668-f641-4d21-8835-3da033372cd0" containerName="registry-server" Nov 29 08:20:33 crc kubenswrapper[4947]: E1129 08:20:33.580468 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="806e5668-f641-4d21-8835-3da033372cd0" containerName="extract-utilities" Nov 29 08:20:33 crc kubenswrapper[4947]: I1129 08:20:33.580476 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="806e5668-f641-4d21-8835-3da033372cd0" containerName="extract-utilities" Nov 29 08:20:33 crc kubenswrapper[4947]: I1129 08:20:33.580749 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="806e5668-f641-4d21-8835-3da033372cd0" containerName="registry-server" Nov 29 08:20:33 crc kubenswrapper[4947]: I1129 08:20:33.587313 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-84fdj" Nov 29 08:20:33 crc kubenswrapper[4947]: I1129 08:20:33.592747 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-84fdj"] Nov 29 08:20:33 crc kubenswrapper[4947]: I1129 08:20:33.662574 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cf34f7d-f690-405e-bfda-59ce4180bab3-utilities\") pod \"community-operators-84fdj\" (UID: \"2cf34f7d-f690-405e-bfda-59ce4180bab3\") " pod="openshift-marketplace/community-operators-84fdj" Nov 29 08:20:33 crc kubenswrapper[4947]: I1129 08:20:33.662780 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cf34f7d-f690-405e-bfda-59ce4180bab3-catalog-content\") pod \"community-operators-84fdj\" (UID: 
\"2cf34f7d-f690-405e-bfda-59ce4180bab3\") " pod="openshift-marketplace/community-operators-84fdj" Nov 29 08:20:33 crc kubenswrapper[4947]: I1129 08:20:33.662993 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwfgv\" (UniqueName: \"kubernetes.io/projected/2cf34f7d-f690-405e-bfda-59ce4180bab3-kube-api-access-xwfgv\") pod \"community-operators-84fdj\" (UID: \"2cf34f7d-f690-405e-bfda-59ce4180bab3\") " pod="openshift-marketplace/community-operators-84fdj" Nov 29 08:20:33 crc kubenswrapper[4947]: I1129 08:20:33.765727 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwfgv\" (UniqueName: \"kubernetes.io/projected/2cf34f7d-f690-405e-bfda-59ce4180bab3-kube-api-access-xwfgv\") pod \"community-operators-84fdj\" (UID: \"2cf34f7d-f690-405e-bfda-59ce4180bab3\") " pod="openshift-marketplace/community-operators-84fdj" Nov 29 08:20:33 crc kubenswrapper[4947]: I1129 08:20:33.765904 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cf34f7d-f690-405e-bfda-59ce4180bab3-utilities\") pod \"community-operators-84fdj\" (UID: \"2cf34f7d-f690-405e-bfda-59ce4180bab3\") " pod="openshift-marketplace/community-operators-84fdj" Nov 29 08:20:33 crc kubenswrapper[4947]: I1129 08:20:33.765963 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cf34f7d-f690-405e-bfda-59ce4180bab3-catalog-content\") pod \"community-operators-84fdj\" (UID: \"2cf34f7d-f690-405e-bfda-59ce4180bab3\") " pod="openshift-marketplace/community-operators-84fdj" Nov 29 08:20:33 crc kubenswrapper[4947]: I1129 08:20:33.766657 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cf34f7d-f690-405e-bfda-59ce4180bab3-catalog-content\") pod 
\"community-operators-84fdj\" (UID: \"2cf34f7d-f690-405e-bfda-59ce4180bab3\") " pod="openshift-marketplace/community-operators-84fdj" Nov 29 08:20:33 crc kubenswrapper[4947]: I1129 08:20:33.766945 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cf34f7d-f690-405e-bfda-59ce4180bab3-utilities\") pod \"community-operators-84fdj\" (UID: \"2cf34f7d-f690-405e-bfda-59ce4180bab3\") " pod="openshift-marketplace/community-operators-84fdj" Nov 29 08:20:33 crc kubenswrapper[4947]: I1129 08:20:33.796073 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwfgv\" (UniqueName: \"kubernetes.io/projected/2cf34f7d-f690-405e-bfda-59ce4180bab3-kube-api-access-xwfgv\") pod \"community-operators-84fdj\" (UID: \"2cf34f7d-f690-405e-bfda-59ce4180bab3\") " pod="openshift-marketplace/community-operators-84fdj" Nov 29 08:20:33 crc kubenswrapper[4947]: I1129 08:20:33.913949 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-84fdj" Nov 29 08:20:34 crc kubenswrapper[4947]: I1129 08:20:34.590787 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-84fdj"] Nov 29 08:20:35 crc kubenswrapper[4947]: I1129 08:20:35.127683 4947 generic.go:334] "Generic (PLEG): container finished" podID="2cf34f7d-f690-405e-bfda-59ce4180bab3" containerID="56a2bd2e753ad2182b97c11773b9a1ffdc37ac396946f832caa2fa6bc2de3d30" exitCode=0 Nov 29 08:20:35 crc kubenswrapper[4947]: I1129 08:20:35.127795 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84fdj" event={"ID":"2cf34f7d-f690-405e-bfda-59ce4180bab3","Type":"ContainerDied","Data":"56a2bd2e753ad2182b97c11773b9a1ffdc37ac396946f832caa2fa6bc2de3d30"} Nov 29 08:20:35 crc kubenswrapper[4947]: I1129 08:20:35.128022 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84fdj" event={"ID":"2cf34f7d-f690-405e-bfda-59ce4180bab3","Type":"ContainerStarted","Data":"0ab48eb8fb0a305526de743af2987bbde455cf2e66ea5e6914e5a13f6ea0ba64"} Nov 29 08:20:35 crc kubenswrapper[4947]: I1129 08:20:35.129839 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 08:20:36 crc kubenswrapper[4947]: I1129 08:20:36.138867 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84fdj" event={"ID":"2cf34f7d-f690-405e-bfda-59ce4180bab3","Type":"ContainerStarted","Data":"fc479ea9c84b1eb40b7af0faae947e58f501bb73f066899ae3208ff82e462e9e"} Nov 29 08:20:37 crc kubenswrapper[4947]: I1129 08:20:37.149014 4947 generic.go:334] "Generic (PLEG): container finished" podID="2cf34f7d-f690-405e-bfda-59ce4180bab3" containerID="fc479ea9c84b1eb40b7af0faae947e58f501bb73f066899ae3208ff82e462e9e" exitCode=0 Nov 29 08:20:37 crc kubenswrapper[4947]: I1129 08:20:37.149360 4947 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-84fdj" event={"ID":"2cf34f7d-f690-405e-bfda-59ce4180bab3","Type":"ContainerDied","Data":"fc479ea9c84b1eb40b7af0faae947e58f501bb73f066899ae3208ff82e462e9e"} Nov 29 08:20:39 crc kubenswrapper[4947]: I1129 08:20:39.171672 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84fdj" event={"ID":"2cf34f7d-f690-405e-bfda-59ce4180bab3","Type":"ContainerStarted","Data":"dc94826fae4472b3f7ad2aa90a393a6f27a732b12e383ecb9cbe452915e6d921"} Nov 29 08:20:39 crc kubenswrapper[4947]: I1129 08:20:39.221744 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-84fdj" podStartSLOduration=2.833517533 podStartE2EDuration="6.221709556s" podCreationTimestamp="2025-11-29 08:20:33 +0000 UTC" firstStartedPulling="2025-11-29 08:20:35.12954772 +0000 UTC m=+6386.173929801" lastFinishedPulling="2025-11-29 08:20:38.517739743 +0000 UTC m=+6389.562121824" observedRunningTime="2025-11-29 08:20:39.208297988 +0000 UTC m=+6390.252680079" watchObservedRunningTime="2025-11-29 08:20:39.221709556 +0000 UTC m=+6390.266091637" Nov 29 08:20:43 crc kubenswrapper[4947]: I1129 08:20:43.914697 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-84fdj" Nov 29 08:20:43 crc kubenswrapper[4947]: I1129 08:20:43.915291 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-84fdj" Nov 29 08:20:43 crc kubenswrapper[4947]: I1129 08:20:43.971843 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-84fdj" Nov 29 08:20:44 crc kubenswrapper[4947]: I1129 08:20:44.274426 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-84fdj" Nov 29 08:20:44 crc kubenswrapper[4947]: I1129 08:20:44.575465 
4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-84fdj"] Nov 29 08:20:46 crc kubenswrapper[4947]: I1129 08:20:46.256013 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-84fdj" podUID="2cf34f7d-f690-405e-bfda-59ce4180bab3" containerName="registry-server" containerID="cri-o://dc94826fae4472b3f7ad2aa90a393a6f27a732b12e383ecb9cbe452915e6d921" gracePeriod=2 Nov 29 08:20:46 crc kubenswrapper[4947]: I1129 08:20:46.838144 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-84fdj" Nov 29 08:20:46 crc kubenswrapper[4947]: I1129 08:20:46.956979 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwfgv\" (UniqueName: \"kubernetes.io/projected/2cf34f7d-f690-405e-bfda-59ce4180bab3-kube-api-access-xwfgv\") pod \"2cf34f7d-f690-405e-bfda-59ce4180bab3\" (UID: \"2cf34f7d-f690-405e-bfda-59ce4180bab3\") " Nov 29 08:20:46 crc kubenswrapper[4947]: I1129 08:20:46.957120 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cf34f7d-f690-405e-bfda-59ce4180bab3-utilities\") pod \"2cf34f7d-f690-405e-bfda-59ce4180bab3\" (UID: \"2cf34f7d-f690-405e-bfda-59ce4180bab3\") " Nov 29 08:20:46 crc kubenswrapper[4947]: I1129 08:20:46.957197 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cf34f7d-f690-405e-bfda-59ce4180bab3-catalog-content\") pod \"2cf34f7d-f690-405e-bfda-59ce4180bab3\" (UID: \"2cf34f7d-f690-405e-bfda-59ce4180bab3\") " Nov 29 08:20:46 crc kubenswrapper[4947]: I1129 08:20:46.958360 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cf34f7d-f690-405e-bfda-59ce4180bab3-utilities" (OuterVolumeSpecName: "utilities") pod 
"2cf34f7d-f690-405e-bfda-59ce4180bab3" (UID: "2cf34f7d-f690-405e-bfda-59ce4180bab3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:20:46 crc kubenswrapper[4947]: I1129 08:20:46.979696 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf34f7d-f690-405e-bfda-59ce4180bab3-kube-api-access-xwfgv" (OuterVolumeSpecName: "kube-api-access-xwfgv") pod "2cf34f7d-f690-405e-bfda-59ce4180bab3" (UID: "2cf34f7d-f690-405e-bfda-59ce4180bab3"). InnerVolumeSpecName "kube-api-access-xwfgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:20:47 crc kubenswrapper[4947]: I1129 08:20:47.031038 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cf34f7d-f690-405e-bfda-59ce4180bab3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2cf34f7d-f690-405e-bfda-59ce4180bab3" (UID: "2cf34f7d-f690-405e-bfda-59ce4180bab3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:20:47 crc kubenswrapper[4947]: I1129 08:20:47.062479 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwfgv\" (UniqueName: \"kubernetes.io/projected/2cf34f7d-f690-405e-bfda-59ce4180bab3-kube-api-access-xwfgv\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:47 crc kubenswrapper[4947]: I1129 08:20:47.062521 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cf34f7d-f690-405e-bfda-59ce4180bab3-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:47 crc kubenswrapper[4947]: I1129 08:20:47.062533 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cf34f7d-f690-405e-bfda-59ce4180bab3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:47 crc kubenswrapper[4947]: I1129 08:20:47.272852 4947 generic.go:334] "Generic (PLEG): container finished" podID="2cf34f7d-f690-405e-bfda-59ce4180bab3" containerID="dc94826fae4472b3f7ad2aa90a393a6f27a732b12e383ecb9cbe452915e6d921" exitCode=0 Nov 29 08:20:47 crc kubenswrapper[4947]: I1129 08:20:47.272901 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84fdj" event={"ID":"2cf34f7d-f690-405e-bfda-59ce4180bab3","Type":"ContainerDied","Data":"dc94826fae4472b3f7ad2aa90a393a6f27a732b12e383ecb9cbe452915e6d921"} Nov 29 08:20:47 crc kubenswrapper[4947]: I1129 08:20:47.272929 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84fdj" event={"ID":"2cf34f7d-f690-405e-bfda-59ce4180bab3","Type":"ContainerDied","Data":"0ab48eb8fb0a305526de743af2987bbde455cf2e66ea5e6914e5a13f6ea0ba64"} Nov 29 08:20:47 crc kubenswrapper[4947]: I1129 08:20:47.272947 4947 scope.go:117] "RemoveContainer" containerID="dc94826fae4472b3f7ad2aa90a393a6f27a732b12e383ecb9cbe452915e6d921" Nov 29 08:20:47 crc kubenswrapper[4947]: I1129 
08:20:47.273108 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-84fdj" Nov 29 08:20:47 crc kubenswrapper[4947]: I1129 08:20:47.299951 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-84fdj"] Nov 29 08:20:47 crc kubenswrapper[4947]: I1129 08:20:47.309315 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-84fdj"] Nov 29 08:20:47 crc kubenswrapper[4947]: I1129 08:20:47.313679 4947 scope.go:117] "RemoveContainer" containerID="fc479ea9c84b1eb40b7af0faae947e58f501bb73f066899ae3208ff82e462e9e" Nov 29 08:20:47 crc kubenswrapper[4947]: I1129 08:20:47.335829 4947 scope.go:117] "RemoveContainer" containerID="56a2bd2e753ad2182b97c11773b9a1ffdc37ac396946f832caa2fa6bc2de3d30" Nov 29 08:20:47 crc kubenswrapper[4947]: I1129 08:20:47.454474 4947 scope.go:117] "RemoveContainer" containerID="dc94826fae4472b3f7ad2aa90a393a6f27a732b12e383ecb9cbe452915e6d921" Nov 29 08:20:47 crc kubenswrapper[4947]: E1129 08:20:47.457797 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc94826fae4472b3f7ad2aa90a393a6f27a732b12e383ecb9cbe452915e6d921\": container with ID starting with dc94826fae4472b3f7ad2aa90a393a6f27a732b12e383ecb9cbe452915e6d921 not found: ID does not exist" containerID="dc94826fae4472b3f7ad2aa90a393a6f27a732b12e383ecb9cbe452915e6d921" Nov 29 08:20:47 crc kubenswrapper[4947]: I1129 08:20:47.457829 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc94826fae4472b3f7ad2aa90a393a6f27a732b12e383ecb9cbe452915e6d921"} err="failed to get container status \"dc94826fae4472b3f7ad2aa90a393a6f27a732b12e383ecb9cbe452915e6d921\": rpc error: code = NotFound desc = could not find container \"dc94826fae4472b3f7ad2aa90a393a6f27a732b12e383ecb9cbe452915e6d921\": container with ID starting with 
dc94826fae4472b3f7ad2aa90a393a6f27a732b12e383ecb9cbe452915e6d921 not found: ID does not exist" Nov 29 08:20:47 crc kubenswrapper[4947]: I1129 08:20:47.457852 4947 scope.go:117] "RemoveContainer" containerID="fc479ea9c84b1eb40b7af0faae947e58f501bb73f066899ae3208ff82e462e9e" Nov 29 08:20:47 crc kubenswrapper[4947]: E1129 08:20:47.464374 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc479ea9c84b1eb40b7af0faae947e58f501bb73f066899ae3208ff82e462e9e\": container with ID starting with fc479ea9c84b1eb40b7af0faae947e58f501bb73f066899ae3208ff82e462e9e not found: ID does not exist" containerID="fc479ea9c84b1eb40b7af0faae947e58f501bb73f066899ae3208ff82e462e9e" Nov 29 08:20:47 crc kubenswrapper[4947]: I1129 08:20:47.464452 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc479ea9c84b1eb40b7af0faae947e58f501bb73f066899ae3208ff82e462e9e"} err="failed to get container status \"fc479ea9c84b1eb40b7af0faae947e58f501bb73f066899ae3208ff82e462e9e\": rpc error: code = NotFound desc = could not find container \"fc479ea9c84b1eb40b7af0faae947e58f501bb73f066899ae3208ff82e462e9e\": container with ID starting with fc479ea9c84b1eb40b7af0faae947e58f501bb73f066899ae3208ff82e462e9e not found: ID does not exist" Nov 29 08:20:47 crc kubenswrapper[4947]: I1129 08:20:47.464486 4947 scope.go:117] "RemoveContainer" containerID="56a2bd2e753ad2182b97c11773b9a1ffdc37ac396946f832caa2fa6bc2de3d30" Nov 29 08:20:47 crc kubenswrapper[4947]: E1129 08:20:47.471333 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56a2bd2e753ad2182b97c11773b9a1ffdc37ac396946f832caa2fa6bc2de3d30\": container with ID starting with 56a2bd2e753ad2182b97c11773b9a1ffdc37ac396946f832caa2fa6bc2de3d30 not found: ID does not exist" containerID="56a2bd2e753ad2182b97c11773b9a1ffdc37ac396946f832caa2fa6bc2de3d30" Nov 29 08:20:47 crc 
kubenswrapper[4947]: I1129 08:20:47.471388 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56a2bd2e753ad2182b97c11773b9a1ffdc37ac396946f832caa2fa6bc2de3d30"} err="failed to get container status \"56a2bd2e753ad2182b97c11773b9a1ffdc37ac396946f832caa2fa6bc2de3d30\": rpc error: code = NotFound desc = could not find container \"56a2bd2e753ad2182b97c11773b9a1ffdc37ac396946f832caa2fa6bc2de3d30\": container with ID starting with 56a2bd2e753ad2182b97c11773b9a1ffdc37ac396946f832caa2fa6bc2de3d30 not found: ID does not exist" Nov 29 08:20:47 crc kubenswrapper[4947]: E1129 08:20:47.483999 4947 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cf34f7d_f690_405e_bfda_59ce4180bab3.slice/crio-0ab48eb8fb0a305526de743af2987bbde455cf2e66ea5e6914e5a13f6ea0ba64\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cf34f7d_f690_405e_bfda_59ce4180bab3.slice\": RecentStats: unable to find data in memory cache]" Nov 29 08:20:49 crc kubenswrapper[4947]: I1129 08:20:49.189192 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cf34f7d-f690-405e-bfda-59ce4180bab3" path="/var/lib/kubelet/pods/2cf34f7d-f690-405e-bfda-59ce4180bab3/volumes" Nov 29 08:20:53 crc kubenswrapper[4947]: E1129 08:20:53.499727 4947 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.47:34540->38.102.83.47:34209: read tcp 38.102.83.47:34540->38.102.83.47:34209: read: connection reset by peer Nov 29 08:21:52 crc kubenswrapper[4947]: I1129 08:21:52.987797 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 29 08:21:52 crc kubenswrapper[4947]: I1129 08:21:52.988466 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:22:22 crc kubenswrapper[4947]: I1129 08:22:22.988472 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:22:22 crc kubenswrapper[4947]: I1129 08:22:22.989134 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:22:28 crc kubenswrapper[4947]: I1129 08:22:28.344273 4947 generic.go:334] "Generic (PLEG): container finished" podID="ec8ff821-1ddd-4193-9469-4bdb054eb399" containerID="c23c2872ff5f6ea5ba8ba3bdbc6e6e90cf0544d65962b452556c7a1c3a3b0db5" exitCode=0 Nov 29 08:22:28 crc kubenswrapper[4947]: I1129 08:22:28.344368 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p8sjr/must-gather-r59nr" event={"ID":"ec8ff821-1ddd-4193-9469-4bdb054eb399","Type":"ContainerDied","Data":"c23c2872ff5f6ea5ba8ba3bdbc6e6e90cf0544d65962b452556c7a1c3a3b0db5"} Nov 29 08:22:28 crc kubenswrapper[4947]: I1129 08:22:28.345581 4947 scope.go:117] "RemoveContainer" containerID="c23c2872ff5f6ea5ba8ba3bdbc6e6e90cf0544d65962b452556c7a1c3a3b0db5" Nov 29 08:22:28 crc kubenswrapper[4947]: I1129 08:22:28.422821 4947 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-must-gather-p8sjr_must-gather-r59nr_ec8ff821-1ddd-4193-9469-4bdb054eb399/gather/0.log" Nov 29 08:22:36 crc kubenswrapper[4947]: I1129 08:22:36.374105 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-p8sjr/must-gather-r59nr"] Nov 29 08:22:36 crc kubenswrapper[4947]: I1129 08:22:36.374939 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-p8sjr/must-gather-r59nr" podUID="ec8ff821-1ddd-4193-9469-4bdb054eb399" containerName="copy" containerID="cri-o://f3b5e3f574383843e7add8cb7cb68bf00c2a8044690404ca41541db99ec91530" gracePeriod=2 Nov 29 08:22:36 crc kubenswrapper[4947]: I1129 08:22:36.389964 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-p8sjr/must-gather-r59nr"] Nov 29 08:22:36 crc kubenswrapper[4947]: I1129 08:22:36.882442 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-p8sjr_must-gather-r59nr_ec8ff821-1ddd-4193-9469-4bdb054eb399/copy/0.log" Nov 29 08:22:36 crc kubenswrapper[4947]: I1129 08:22:36.883041 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p8sjr/must-gather-r59nr" Nov 29 08:22:36 crc kubenswrapper[4947]: I1129 08:22:36.957639 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wg25\" (UniqueName: \"kubernetes.io/projected/ec8ff821-1ddd-4193-9469-4bdb054eb399-kube-api-access-6wg25\") pod \"ec8ff821-1ddd-4193-9469-4bdb054eb399\" (UID: \"ec8ff821-1ddd-4193-9469-4bdb054eb399\") " Nov 29 08:22:36 crc kubenswrapper[4947]: I1129 08:22:36.957800 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ec8ff821-1ddd-4193-9469-4bdb054eb399-must-gather-output\") pod \"ec8ff821-1ddd-4193-9469-4bdb054eb399\" (UID: \"ec8ff821-1ddd-4193-9469-4bdb054eb399\") " Nov 29 08:22:36 crc kubenswrapper[4947]: I1129 08:22:36.966337 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec8ff821-1ddd-4193-9469-4bdb054eb399-kube-api-access-6wg25" (OuterVolumeSpecName: "kube-api-access-6wg25") pod "ec8ff821-1ddd-4193-9469-4bdb054eb399" (UID: "ec8ff821-1ddd-4193-9469-4bdb054eb399"). InnerVolumeSpecName "kube-api-access-6wg25". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:22:37 crc kubenswrapper[4947]: I1129 08:22:37.061247 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wg25\" (UniqueName: \"kubernetes.io/projected/ec8ff821-1ddd-4193-9469-4bdb054eb399-kube-api-access-6wg25\") on node \"crc\" DevicePath \"\"" Nov 29 08:22:37 crc kubenswrapper[4947]: I1129 08:22:37.120914 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec8ff821-1ddd-4193-9469-4bdb054eb399-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ec8ff821-1ddd-4193-9469-4bdb054eb399" (UID: "ec8ff821-1ddd-4193-9469-4bdb054eb399"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:22:37 crc kubenswrapper[4947]: I1129 08:22:37.163655 4947 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ec8ff821-1ddd-4193-9469-4bdb054eb399-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 29 08:22:37 crc kubenswrapper[4947]: I1129 08:22:37.190281 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec8ff821-1ddd-4193-9469-4bdb054eb399" path="/var/lib/kubelet/pods/ec8ff821-1ddd-4193-9469-4bdb054eb399/volumes" Nov 29 08:22:37 crc kubenswrapper[4947]: I1129 08:22:37.438121 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-p8sjr_must-gather-r59nr_ec8ff821-1ddd-4193-9469-4bdb054eb399/copy/0.log" Nov 29 08:22:37 crc kubenswrapper[4947]: I1129 08:22:37.443299 4947 generic.go:334] "Generic (PLEG): container finished" podID="ec8ff821-1ddd-4193-9469-4bdb054eb399" containerID="f3b5e3f574383843e7add8cb7cb68bf00c2a8044690404ca41541db99ec91530" exitCode=143 Nov 29 08:22:37 crc kubenswrapper[4947]: I1129 08:22:37.443362 4947 scope.go:117] "RemoveContainer" containerID="f3b5e3f574383843e7add8cb7cb68bf00c2a8044690404ca41541db99ec91530" Nov 29 08:22:37 crc kubenswrapper[4947]: I1129 08:22:37.443399 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p8sjr/must-gather-r59nr" Nov 29 08:22:37 crc kubenswrapper[4947]: I1129 08:22:37.483717 4947 scope.go:117] "RemoveContainer" containerID="c23c2872ff5f6ea5ba8ba3bdbc6e6e90cf0544d65962b452556c7a1c3a3b0db5" Nov 29 08:22:37 crc kubenswrapper[4947]: I1129 08:22:37.612717 4947 scope.go:117] "RemoveContainer" containerID="f3b5e3f574383843e7add8cb7cb68bf00c2a8044690404ca41541db99ec91530" Nov 29 08:22:37 crc kubenswrapper[4947]: E1129 08:22:37.613083 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3b5e3f574383843e7add8cb7cb68bf00c2a8044690404ca41541db99ec91530\": container with ID starting with f3b5e3f574383843e7add8cb7cb68bf00c2a8044690404ca41541db99ec91530 not found: ID does not exist" containerID="f3b5e3f574383843e7add8cb7cb68bf00c2a8044690404ca41541db99ec91530" Nov 29 08:22:37 crc kubenswrapper[4947]: I1129 08:22:37.613114 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3b5e3f574383843e7add8cb7cb68bf00c2a8044690404ca41541db99ec91530"} err="failed to get container status \"f3b5e3f574383843e7add8cb7cb68bf00c2a8044690404ca41541db99ec91530\": rpc error: code = NotFound desc = could not find container \"f3b5e3f574383843e7add8cb7cb68bf00c2a8044690404ca41541db99ec91530\": container with ID starting with f3b5e3f574383843e7add8cb7cb68bf00c2a8044690404ca41541db99ec91530 not found: ID does not exist" Nov 29 08:22:37 crc kubenswrapper[4947]: I1129 08:22:37.613158 4947 scope.go:117] "RemoveContainer" containerID="c23c2872ff5f6ea5ba8ba3bdbc6e6e90cf0544d65962b452556c7a1c3a3b0db5" Nov 29 08:22:37 crc kubenswrapper[4947]: E1129 08:22:37.613454 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c23c2872ff5f6ea5ba8ba3bdbc6e6e90cf0544d65962b452556c7a1c3a3b0db5\": container with ID starting with 
c23c2872ff5f6ea5ba8ba3bdbc6e6e90cf0544d65962b452556c7a1c3a3b0db5 not found: ID does not exist" containerID="c23c2872ff5f6ea5ba8ba3bdbc6e6e90cf0544d65962b452556c7a1c3a3b0db5" Nov 29 08:22:37 crc kubenswrapper[4947]: I1129 08:22:37.613476 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c23c2872ff5f6ea5ba8ba3bdbc6e6e90cf0544d65962b452556c7a1c3a3b0db5"} err="failed to get container status \"c23c2872ff5f6ea5ba8ba3bdbc6e6e90cf0544d65962b452556c7a1c3a3b0db5\": rpc error: code = NotFound desc = could not find container \"c23c2872ff5f6ea5ba8ba3bdbc6e6e90cf0544d65962b452556c7a1c3a3b0db5\": container with ID starting with c23c2872ff5f6ea5ba8ba3bdbc6e6e90cf0544d65962b452556c7a1c3a3b0db5 not found: ID does not exist" Nov 29 08:22:42 crc kubenswrapper[4947]: I1129 08:22:42.930169 4947 scope.go:117] "RemoveContainer" containerID="6bcb6b23de89b102d6f2be30597fd83d927299e8f10c5721809e21867218e602" Nov 29 08:22:52 crc kubenswrapper[4947]: I1129 08:22:52.987873 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:22:52 crc kubenswrapper[4947]: I1129 08:22:52.988429 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:22:52 crc kubenswrapper[4947]: I1129 08:22:52.988483 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 08:22:52 crc kubenswrapper[4947]: I1129 08:22:52.989303 4947 kuberuntime_manager.go:1027] 
"Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"210f0cc487e230d9e23a7bb38858cc276d322f6705bdfbc58e2540a3540f2048"} pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:22:52 crc kubenswrapper[4947]: I1129 08:22:52.989355 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" containerID="cri-o://210f0cc487e230d9e23a7bb38858cc276d322f6705bdfbc58e2540a3540f2048" gracePeriod=600 Nov 29 08:22:53 crc kubenswrapper[4947]: I1129 08:22:53.609573 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerID="210f0cc487e230d9e23a7bb38858cc276d322f6705bdfbc58e2540a3540f2048" exitCode=0 Nov 29 08:22:53 crc kubenswrapper[4947]: I1129 08:22:53.610853 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerDied","Data":"210f0cc487e230d9e23a7bb38858cc276d322f6705bdfbc58e2540a3540f2048"} Nov 29 08:22:53 crc kubenswrapper[4947]: I1129 08:22:53.610909 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerStarted","Data":"d63ce59a6e31b5f4f3dabe1559431fb026eede1c895b5c174b9aaa016b8c7198"} Nov 29 08:22:53 crc kubenswrapper[4947]: I1129 08:22:53.610939 4947 scope.go:117] "RemoveContainer" containerID="d67dc1b8e5602a5dcf9a398c7d5f30c1377f7bf68c3763bf602bafe49c0e95c9" Nov 29 08:23:43 crc kubenswrapper[4947]: I1129 08:23:43.048744 4947 scope.go:117] "RemoveContainer" 
containerID="645b34341bf23009183c7132bc344f65cc7e0f56d692b14530f246ab814f4220" Nov 29 08:25:22 crc kubenswrapper[4947]: I1129 08:25:22.987532 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:25:22 crc kubenswrapper[4947]: I1129 08:25:22.988291 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:25:52 crc kubenswrapper[4947]: I1129 08:25:52.987780 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:25:52 crc kubenswrapper[4947]: I1129 08:25:52.988515 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:26:22 crc kubenswrapper[4947]: I1129 08:26:22.988272 4947 patch_prober.go:28] interesting pod/machine-config-daemon-5zgvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:26:22 crc kubenswrapper[4947]: I1129 08:26:22.988941 4947 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:26:22 crc kubenswrapper[4947]: I1129 08:26:22.989013 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" Nov 29 08:26:22 crc kubenswrapper[4947]: I1129 08:26:22.989679 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d63ce59a6e31b5f4f3dabe1559431fb026eede1c895b5c174b9aaa016b8c7198"} pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:26:22 crc kubenswrapper[4947]: I1129 08:26:22.989751 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerName="machine-config-daemon" containerID="cri-o://d63ce59a6e31b5f4f3dabe1559431fb026eede1c895b5c174b9aaa016b8c7198" gracePeriod=600 Nov 29 08:26:23 crc kubenswrapper[4947]: E1129 08:26:23.118345 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:26:23 crc kubenswrapper[4947]: I1129 08:26:23.789873 4947 generic.go:334] "Generic (PLEG): container finished" 
podID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" containerID="d63ce59a6e31b5f4f3dabe1559431fb026eede1c895b5c174b9aaa016b8c7198" exitCode=0 Nov 29 08:26:23 crc kubenswrapper[4947]: I1129 08:26:23.789930 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" event={"ID":"5f4d791f-bb61-4aaa-a09c-3007b59645a7","Type":"ContainerDied","Data":"d63ce59a6e31b5f4f3dabe1559431fb026eede1c895b5c174b9aaa016b8c7198"} Nov 29 08:26:23 crc kubenswrapper[4947]: I1129 08:26:23.789976 4947 scope.go:117] "RemoveContainer" containerID="210f0cc487e230d9e23a7bb38858cc276d322f6705bdfbc58e2540a3540f2048" Nov 29 08:26:23 crc kubenswrapper[4947]: I1129 08:26:23.791164 4947 scope.go:117] "RemoveContainer" containerID="d63ce59a6e31b5f4f3dabe1559431fb026eede1c895b5c174b9aaa016b8c7198" Nov 29 08:26:23 crc kubenswrapper[4947]: E1129 08:26:23.791549 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:26:37 crc kubenswrapper[4947]: I1129 08:26:37.179566 4947 scope.go:117] "RemoveContainer" containerID="d63ce59a6e31b5f4f3dabe1559431fb026eede1c895b5c174b9aaa016b8c7198" Nov 29 08:26:37 crc kubenswrapper[4947]: E1129 08:26:37.180741 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 
08:26:51 crc kubenswrapper[4947]: I1129 08:26:51.179678 4947 scope.go:117] "RemoveContainer" containerID="d63ce59a6e31b5f4f3dabe1559431fb026eede1c895b5c174b9aaa016b8c7198" Nov 29 08:26:51 crc kubenswrapper[4947]: E1129 08:26:51.180978 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:27:02 crc kubenswrapper[4947]: I1129 08:27:02.179566 4947 scope.go:117] "RemoveContainer" containerID="d63ce59a6e31b5f4f3dabe1559431fb026eede1c895b5c174b9aaa016b8c7198" Nov 29 08:27:02 crc kubenswrapper[4947]: E1129 08:27:02.180469 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:27:13 crc kubenswrapper[4947]: I1129 08:27:13.179441 4947 scope.go:117] "RemoveContainer" containerID="d63ce59a6e31b5f4f3dabe1559431fb026eede1c895b5c174b9aaa016b8c7198" Nov 29 08:27:13 crc kubenswrapper[4947]: E1129 08:27:13.180529 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" 
podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7" Nov 29 08:27:25 crc kubenswrapper[4947]: I1129 08:27:25.178992 4947 scope.go:117] "RemoveContainer" containerID="d63ce59a6e31b5f4f3dabe1559431fb026eede1c895b5c174b9aaa016b8c7198" Nov 29 08:27:25 crc kubenswrapper[4947]: E1129 08:27:25.179998 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5zgvc_openshift-machine-config-operator(5f4d791f-bb61-4aaa-a09c-3007b59645a7)\"" pod="openshift-machine-config-operator/machine-config-daemon-5zgvc" podUID="5f4d791f-bb61-4aaa-a09c-3007b59645a7"